Jul 7 00:22:15.867868 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025
Jul 7 00:22:15.867898 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:22:15.867908 kernel: BIOS-provided physical RAM map:
Jul 7 00:22:15.867915 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 00:22:15.867921 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 00:22:15.867928 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 00:22:15.867936 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jul 7 00:22:15.867949 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jul 7 00:22:15.867958 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 00:22:15.867965 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 00:22:15.867972 kernel: NX (Execute Disable) protection: active
Jul 7 00:22:15.867979 kernel: APIC: Static calls initialized
Jul 7 00:22:15.867986 kernel: SMBIOS 2.8 present.
Jul 7 00:22:15.867993 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jul 7 00:22:15.868006 kernel: DMI: Memory slots populated: 1/1
Jul 7 00:22:15.868013 kernel: Hypervisor detected: KVM
Jul 7 00:22:15.868024 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 00:22:15.868032 kernel: kvm-clock: using sched offset of 4868095216 cycles
Jul 7 00:22:15.868041 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:22:15.868049 kernel: tsc: Detected 2494.140 MHz processor
Jul 7 00:22:15.868057 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:22:15.868066 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:22:15.868074 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jul 7 00:22:15.868084 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 00:22:15.868092 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:22:15.868100 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:22:15.868108 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jul 7 00:22:15.868116 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:22:15.868124 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:22:15.868132 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:22:15.868140 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 7 00:22:15.868148 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:22:15.868158 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:22:15.868166 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:22:15.868174 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:22:15.868182 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jul 7 00:22:15.868190 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jul 7 00:22:15.868197 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 7 00:22:15.868205 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jul 7 00:22:15.868213 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jul 7 00:22:15.868227 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jul 7 00:22:15.868235 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jul 7 00:22:15.868243 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 7 00:22:15.868252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 7 00:22:15.868260 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Jul 7 00:22:15.868271 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Jul 7 00:22:15.868279 kernel: Zone ranges:
Jul 7 00:22:15.868288 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:22:15.868296 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jul 7 00:22:15.868304 kernel: Normal empty
Jul 7 00:22:15.868313 kernel: Device empty
Jul 7 00:22:15.868321 kernel: Movable zone start for each node
Jul 7 00:22:15.868329 kernel: Early memory node ranges
Jul 7 00:22:15.868337 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 00:22:15.868345 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jul 7 00:22:15.868357 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jul 7 00:22:15.868365 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:22:15.868374 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 00:22:15.868382 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jul 7 00:22:15.868390 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 00:22:15.868398 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 00:22:15.868409 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 00:22:15.868417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 00:22:15.868427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 00:22:15.868438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:22:15.868449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 00:22:15.868457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 00:22:15.868465 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:22:15.868474 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 00:22:15.868482 kernel: TSC deadline timer available
Jul 7 00:22:15.868490 kernel: CPU topo: Max. logical packages: 1
Jul 7 00:22:15.868499 kernel: CPU topo: Max. logical dies: 1
Jul 7 00:22:15.868507 kernel: CPU topo: Max. dies per package: 1
Jul 7 00:22:15.868519 kernel: CPU topo: Max. threads per core: 1
Jul 7 00:22:15.868533 kernel: CPU topo: Num. cores per package: 2
Jul 7 00:22:15.868546 kernel: CPU topo: Num. threads per package: 2
Jul 7 00:22:15.868558 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 7 00:22:15.868571 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 00:22:15.868583 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 7 00:22:15.868592 kernel: Booting paravirtualized kernel on KVM
Jul 7 00:22:15.868600 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:22:15.868609 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 00:22:15.868621 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 7 00:22:15.868629 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 7 00:22:15.868637 kernel: pcpu-alloc: [0] 0 1
Jul 7 00:22:15.868645 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 7 00:22:15.868656 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:22:15.868665 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:22:15.869699 kernel: random: crng init done
Jul 7 00:22:15.869735 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:22:15.869750 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 00:22:15.869771 kernel: Fallback order for Node 0: 0
Jul 7 00:22:15.869784 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Jul 7 00:22:15.869797 kernel: Policy zone: DMA32
Jul 7 00:22:15.869808 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:22:15.869816 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 00:22:15.869825 kernel: Kernel/User page tables isolation: enabled
Jul 7 00:22:15.869834 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 00:22:15.869842 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 00:22:15.869851 kernel: Dynamic Preempt: voluntary
Jul 7 00:22:15.869863 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:22:15.869873 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:22:15.869881 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 00:22:15.869890 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:22:15.869899 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:22:15.869907 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:22:15.869915 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:22:15.869924 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 00:22:15.869932 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:22:15.869950 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:22:15.869959 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:22:15.869968 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 00:22:15.869976 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:22:15.869985 kernel: Console: colour VGA+ 80x25
Jul 7 00:22:15.869993 kernel: printk: legacy console [tty0] enabled
Jul 7 00:22:15.870002 kernel: printk: legacy console [ttyS0] enabled
Jul 7 00:22:15.870010 kernel: ACPI: Core revision 20240827
Jul 7 00:22:15.870019 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 00:22:15.870038 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:22:15.870047 kernel: x2apic enabled
Jul 7 00:22:15.870056 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 00:22:15.870068 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 00:22:15.870080 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jul 7 00:22:15.870091 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jul 7 00:22:15.870105 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 7 00:22:15.870118 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 7 00:22:15.870128 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:22:15.870143 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 00:22:15.870155 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 00:22:15.870164 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 7 00:22:15.870174 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 00:22:15.870182 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 00:22:15.870191 kernel: MDS: Mitigation: Clear CPU buffers
Jul 7 00:22:15.870200 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 7 00:22:15.870212 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 00:22:15.870221 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 00:22:15.870230 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 00:22:15.870239 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 00:22:15.870248 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 00:22:15.870257 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 7 00:22:15.870266 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:22:15.870275 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:22:15.870284 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 00:22:15.870297 kernel: landlock: Up and running.
Jul 7 00:22:15.870305 kernel: SELinux: Initializing.
Jul 7 00:22:15.870314 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 00:22:15.870323 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 00:22:15.870332 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jul 7 00:22:15.870341 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jul 7 00:22:15.870351 kernel: signal: max sigframe size: 1776
Jul 7 00:22:15.870359 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:22:15.870369 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:22:15.870381 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 00:22:15.870390 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 7 00:22:15.870402 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:22:15.870415 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:22:15.870434 kernel: .... node #0, CPUs: #1
Jul 7 00:22:15.870449 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 00:22:15.870459 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jul 7 00:22:15.870469 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 125140K reserved, 0K cma-reserved)
Jul 7 00:22:15.870478 kernel: devtmpfs: initialized
Jul 7 00:22:15.870490 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:22:15.870499 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:22:15.870508 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 00:22:15.870517 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:22:15.870526 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:22:15.870542 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:22:15.870559 kernel: audit: type=2000 audit(1751847732.298:1): state=initialized audit_enabled=0 res=1
Jul 7 00:22:15.870572 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:22:15.870581 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:22:15.870594 kernel: cpuidle: using governor menu
Jul 7 00:22:15.870610 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:22:15.870620 kernel: dca service started, version 1.12.1
Jul 7 00:22:15.870629 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:22:15.870639 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:22:15.870648 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:22:15.870660 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:22:15.871700 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:22:15.871725 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:22:15.871739 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:22:15.871749 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 00:22:15.871758 kernel: ACPI: Interpreter enabled
Jul 7 00:22:15.871767 kernel: ACPI: PM: (supports S0 S5)
Jul 7 00:22:15.871776 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:22:15.871787 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:22:15.871804 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 00:22:15.871817 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 7 00:22:15.871829 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 00:22:15.872199 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:22:15.872347 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 7 00:22:15.872464 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 7 00:22:15.872477 kernel: acpiphp: Slot [3] registered
Jul 7 00:22:15.872487 kernel: acpiphp: Slot [4] registered
Jul 7 00:22:15.872496 kernel: acpiphp: Slot [5] registered
Jul 7 00:22:15.872505 kernel: acpiphp: Slot [6] registered
Jul 7 00:22:15.872520 kernel: acpiphp: Slot [7] registered
Jul 7 00:22:15.872529 kernel: acpiphp: Slot [8] registered
Jul 7 00:22:15.872538 kernel: acpiphp: Slot [9] registered
Jul 7 00:22:15.872547 kernel: acpiphp: Slot [10] registered
Jul 7 00:22:15.872556 kernel: acpiphp: Slot [11] registered
Jul 7 00:22:15.872565 kernel: acpiphp: Slot [12] registered
Jul 7 00:22:15.872574 kernel: acpiphp: Slot [13] registered
Jul 7 00:22:15.872584 kernel: acpiphp: Slot [14] registered
Jul 7 00:22:15.872593 kernel: acpiphp: Slot [15] registered
Jul 7 00:22:15.872602 kernel: acpiphp: Slot [16] registered
Jul 7 00:22:15.872613 kernel: acpiphp: Slot [17] registered
Jul 7 00:22:15.872622 kernel: acpiphp: Slot [18] registered
Jul 7 00:22:15.872631 kernel: acpiphp: Slot [19] registered
Jul 7 00:22:15.872640 kernel: acpiphp: Slot [20] registered
Jul 7 00:22:15.872648 kernel: acpiphp: Slot [21] registered
Jul 7 00:22:15.872657 kernel: acpiphp: Slot [22] registered
Jul 7 00:22:15.872666 kernel: acpiphp: Slot [23] registered
Jul 7 00:22:15.872686 kernel: acpiphp: Slot [24] registered
Jul 7 00:22:15.874084 kernel: acpiphp: Slot [25] registered
Jul 7 00:22:15.874111 kernel: acpiphp: Slot [26] registered
Jul 7 00:22:15.874179 kernel: acpiphp: Slot [27] registered
Jul 7 00:22:15.874189 kernel: acpiphp: Slot [28] registered
Jul 7 00:22:15.874198 kernel: acpiphp: Slot [29] registered
Jul 7 00:22:15.874207 kernel: acpiphp: Slot [30] registered
Jul 7 00:22:15.874216 kernel: acpiphp: Slot [31] registered
Jul 7 00:22:15.874225 kernel: PCI host bridge to bus 0000:00
Jul 7 00:22:15.874402 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:22:15.874507 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:22:15.874611 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:22:15.874737 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 7 00:22:15.874820 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 7 00:22:15.874900 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 00:22:15.875032 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 7 00:22:15.875175 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 7 00:22:15.875289 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jul 7 00:22:15.875382 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Jul 7 00:22:15.875490 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 7 00:22:15.875634 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 7 00:22:15.876879 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 7 00:22:15.876988 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 7 00:22:15.877103 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jul 7 00:22:15.877204 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Jul 7 00:22:15.877345 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 7 00:22:15.877442 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 7 00:22:15.877562 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 7 00:22:15.877696 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jul 7 00:22:15.877794 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jul 7 00:22:15.877893 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 7 00:22:15.877985 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Jul 7 00:22:15.878077 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Jul 7 00:22:15.878168 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 00:22:15.878296 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 00:22:15.878407 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Jul 7 00:22:15.878530 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Jul 7 00:22:15.878622 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 7 00:22:15.881241 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 00:22:15.881366 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Jul 7 00:22:15.881464 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Jul 7 00:22:15.881734 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 7 00:22:15.881863 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jul 7 00:22:15.881978 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Jul 7 00:22:15.882070 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Jul 7 00:22:15.882163 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 7 00:22:15.882268 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 00:22:15.882364 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Jul 7 00:22:15.882483 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Jul 7 00:22:15.882581 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 7 00:22:15.885747 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 00:22:15.885947 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Jul 7 00:22:15.886046 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Jul 7 00:22:15.886139 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Jul 7 00:22:15.886255 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 00:22:15.886350 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Jul 7 00:22:15.886446 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Jul 7 00:22:15.886459 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 00:22:15.886468 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 00:22:15.886478 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 00:22:15.886487 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 00:22:15.886496 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 7 00:22:15.886505 kernel: iommu: Default domain type: Translated
Jul 7 00:22:15.886515 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 00:22:15.886524 kernel: PCI: Using ACPI for IRQ routing
Jul 7 00:22:15.886536 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 00:22:15.886546 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 00:22:15.886555 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jul 7 00:22:15.886647 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 7 00:22:15.886754 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 7 00:22:15.886845 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 00:22:15.886857 kernel: vgaarb: loaded
Jul 7 00:22:15.886867 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 00:22:15.886876 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 00:22:15.886890 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 00:22:15.886899 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:22:15.886908 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:22:15.886917 kernel: pnp: PnP ACPI init
Jul 7 00:22:15.886926 kernel: pnp: PnP ACPI: found 4 devices
Jul 7 00:22:15.886936 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 00:22:15.886945 kernel: NET: Registered PF_INET protocol family
Jul 7 00:22:15.886954 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:22:15.886963 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 7 00:22:15.886975 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:22:15.886985 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 00:22:15.886994 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 7 00:22:15.887003 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 7 00:22:15.887012 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 00:22:15.887021 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 00:22:15.887030 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:22:15.887039 kernel: NET: Registered PF_XDP protocol family
Jul 7 00:22:15.887168 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 00:22:15.887257 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 00:22:15.887349 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 00:22:15.887430 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 7 00:22:15.887511 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 7 00:22:15.887632 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 7 00:22:15.888830 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 7 00:22:15.888850 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 7 00:22:15.888954 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27444 usecs
Jul 7 00:22:15.888967 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:22:15.888976 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 7 00:22:15.888987 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jul 7 00:22:15.888997 kernel: Initialise system trusted keyrings
Jul 7 00:22:15.889007 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 7 00:22:15.889016 kernel: Key type asymmetric registered
Jul 7 00:22:15.889025 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:22:15.889035 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 00:22:15.889048 kernel: io scheduler mq-deadline registered
Jul 7 00:22:15.889057 kernel: io scheduler kyber registered
Jul 7 00:22:15.889066 kernel: io scheduler bfq registered
Jul 7 00:22:15.889076 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:22:15.889085 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 7 00:22:15.889094 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 7 00:22:15.889103 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 7 00:22:15.889112 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:22:15.889122 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:22:15.889135 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 00:22:15.889144 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 00:22:15.889153 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 00:22:15.889275 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 7 00:22:15.889364 kernel: rtc_cmos 00:03: registered as rtc0
Jul 7 00:22:15.889473 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T00:22:15 UTC (1751847735)
Jul 7 00:22:15.889595 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jul 7 00:22:15.889609 kernel: intel_pstate: CPU model not supported
Jul 7 00:22:15.889627 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jul 7 00:22:15.889640 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:22:15.889652 kernel: Segment Routing with IPv6
Jul 7 00:22:15.889663 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:22:15.890698 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:22:15.890717 kernel: Key type dns_resolver registered
Jul 7 00:22:15.890728 kernel: IPI shorthand broadcast: enabled
Jul 7 00:22:15.890737 kernel: sched_clock: Marking stable (3418002863, 127310683)->(3567516784, -22203238)
Jul 7 00:22:15.890746 kernel: registered taskstats version 1
Jul 7 00:22:15.890762 kernel: Loading compiled-in X.509 certificates
Jul 7 00:22:15.890771 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06'
Jul 7 00:22:15.890780 kernel: Demotion targets for Node 0: null
Jul 7 00:22:15.890790 kernel: Key type .fscrypt registered
Jul 7 00:22:15.890799 kernel: Key type fscrypt-provisioning registered
Jul 7 00:22:15.890810 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:22:15.890837 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:22:15.890849 kernel: ima: No architecture policies found
Jul 7 00:22:15.890861 kernel: clk: Disabling unused clocks
Jul 7 00:22:15.890871 kernel: Warning: unable to open an initial console.
Jul 7 00:22:15.890881 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 00:22:15.890890 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 00:22:15.890900 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 00:22:15.890910 kernel: Run /init as init process
Jul 7 00:22:15.890919 kernel: with arguments:
Jul 7 00:22:15.890929 kernel: /init
Jul 7 00:22:15.890938 kernel: with environment:
Jul 7 00:22:15.890947 kernel: HOME=/
Jul 7 00:22:15.890959 kernel: TERM=linux
Jul 7 00:22:15.890968 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:22:15.890980 systemd[1]: Successfully made /usr/ read-only.
Jul 7 00:22:15.890994 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:22:15.891004 systemd[1]: Detected virtualization kvm.
Jul 7 00:22:15.891014 systemd[1]: Detected architecture x86-64.
Jul 7 00:22:15.891023 systemd[1]: Running in initrd.
Jul 7 00:22:15.891035 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:22:15.891046 systemd[1]: Hostname set to .
Jul 7 00:22:15.891055 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:22:15.891066 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:22:15.891076 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:22:15.891086 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:22:15.891097 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:22:15.891107 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:22:15.891120 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:22:15.891130 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:22:15.891142 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:22:15.891154 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:22:15.891167 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:22:15.891177 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:22:15.891187 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:22:15.891197 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:22:15.891207 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:22:15.891217 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:22:15.891227 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:22:15.891237 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:22:15.891249 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:22:15.891259 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 00:22:15.891269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:22:15.891279 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:22:15.891289 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:22:15.891299 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:22:15.891309 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:22:15.891319 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:22:15.891329 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:22:15.891342 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 00:22:15.891352 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:22:15.891362 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:22:15.891372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:22:15.891382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:22:15.891392 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:22:15.891405 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:22:15.891415 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:22:15.891425 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:22:15.891480 systemd-journald[213]: Collecting audit messages is disabled.
Jul 7 00:22:15.891509 systemd-journald[213]: Journal started
Jul 7 00:22:15.891531 systemd-journald[213]: Runtime Journal (/run/log/journal/865912ec1ac44f83a5b1d54a4d3916c5) is 4.9M, max 39.5M, 34.6M free.
Jul 7 00:22:15.872279 systemd-modules-load[214]: Inserted module 'overlay'
Jul 7 00:22:15.914066 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:22:15.914123 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:22:15.914164 kernel: Bridge firewalling registered
Jul 7 00:22:15.906056 systemd-modules-load[214]: Inserted module 'br_netfilter'
Jul 7 00:22:15.914818 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:22:15.915476 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:22:15.916839 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:22:15.921170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:22:15.922389 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:22:15.924875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:22:15.929348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:22:15.947826 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 00:22:15.953475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:22:15.955455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:22:15.957924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:22:15.960863 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:22:15.963315 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:22:15.966870 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:22:15.989399 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:22:16.012091 systemd-resolved[249]: Positive Trust Anchors:
Jul 7 00:22:16.012106 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:22:16.012143 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:22:16.018637 systemd-resolved[249]: Defaulting to hostname 'linux'.
Jul 7 00:22:16.020998 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:22:16.022238 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:22:16.113725 kernel: SCSI subsystem initialized
Jul 7 00:22:16.123706 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:22:16.135702 kernel: iscsi: registered transport (tcp)
Jul 7 00:22:16.159877 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:22:16.159976 kernel: QLogic iSCSI HBA Driver
Jul 7 00:22:16.184350 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:22:16.213021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:22:16.215935 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:22:16.280830 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:22:16.284313 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:22:16.336729 kernel: raid6: avx2x4 gen() 16769 MB/s
Jul 7 00:22:16.353727 kernel: raid6: avx2x2 gen() 17550 MB/s
Jul 7 00:22:16.371039 kernel: raid6: avx2x1 gen() 13404 MB/s
Jul 7 00:22:16.371141 kernel: raid6: using algorithm avx2x2 gen() 17550 MB/s
Jul 7 00:22:16.389469 kernel: raid6: .... xor() 16146 MB/s, rmw enabled
Jul 7 00:22:16.389618 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 00:22:16.412740 kernel: xor: automatically using best checksumming function avx
Jul 7 00:22:16.588722 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:22:16.597424 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:22:16.600514 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:22:16.632990 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 7 00:22:16.640946 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:22:16.645003 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:22:16.683993 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jul 7 00:22:16.717847 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:22:16.719775 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:22:16.794494 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:22:16.797312 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:22:16.863753 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 7 00:22:16.876067 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 7 00:22:16.880729 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Jul 7 00:22:16.890701 kernel: scsi host0: Virtio SCSI HBA
Jul 7 00:22:16.904988 kernel: ACPI: bus type USB registered
Jul 7 00:22:16.905050 kernel: usbcore: registered new interface driver usbfs
Jul 7 00:22:16.905064 kernel: usbcore: registered new interface driver hub
Jul 7 00:22:16.905793 kernel: usbcore: registered new device driver usb
Jul 7 00:22:16.910805 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:22:16.910896 kernel: GPT:9289727 != 125829119
Jul 7 00:22:16.910916 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:22:16.910937 kernel: GPT:9289727 != 125829119
Jul 7 00:22:16.910955 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:22:16.910975 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:22:16.932128 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 7 00:22:16.932466 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 7 00:22:16.932616 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 7 00:22:16.933797 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 7 00:22:16.934886 kernel: hub 1-0:1.0: USB hub found
Jul 7 00:22:16.935104 kernel: hub 1-0:1.0: 2 ports detected
Jul 7 00:22:16.942762 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 7 00:22:16.945356 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jul 7 00:22:16.964734 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:22:16.964768 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 7 00:22:16.976740 kernel: libata version 3.00 loaded.
Jul 7 00:22:16.978750 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:22:17.001724 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 7 00:22:17.013757 kernel: scsi host1: ata_piix
Jul 7 00:22:17.026426 kernel: scsi host2: ata_piix
Jul 7 00:22:17.026860 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Jul 7 00:22:17.026887 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Jul 7 00:22:17.035775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:22:17.035927 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:22:17.038607 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:22:17.042327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:22:17.048411 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:22:17.100566 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 00:22:17.121597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:22:17.131769 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 00:22:17.148248 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 00:22:17.155882 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 00:22:17.156406 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 00:22:17.158512 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:22:17.188324 disk-uuid[609]: Primary Header is updated.
Jul 7 00:22:17.188324 disk-uuid[609]: Secondary Entries is updated.
Jul 7 00:22:17.188324 disk-uuid[609]: Secondary Header is updated.
Jul 7 00:22:17.196702 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:22:17.829258 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:22:17.831087 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:22:17.832043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:22:17.832911 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:22:17.834891 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:22:17.865893 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:22:18.205715 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:22:18.206299 disk-uuid[610]: The operation has completed successfully.
Jul 7 00:22:18.254185 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:22:18.254293 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:22:18.289426 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:22:18.305846 sh[639]: Success
Jul 7 00:22:18.327819 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:22:18.327926 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:22:18.329081 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 00:22:18.339752 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 7 00:22:18.401345 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:22:18.406825 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:22:18.423795 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:22:18.441719 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 00:22:18.441807 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (651)
Jul 7 00:22:18.442745 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239
Jul 7 00:22:18.443955 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:22:18.445272 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 00:22:18.455228 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:22:18.456305 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:22:18.457468 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:22:18.459836 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:22:18.461793 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:22:18.494583 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (682)
Jul 7 00:22:18.494659 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:22:18.497311 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:22:18.497392 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 00:22:18.506716 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:22:18.508452 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:22:18.511172 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:22:18.599975 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:22:18.604884 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:22:18.660243 systemd-networkd[821]: lo: Link UP
Jul 7 00:22:18.660255 systemd-networkd[821]: lo: Gained carrier
Jul 7 00:22:18.664735 systemd-networkd[821]: Enumeration completed
Jul 7 00:22:18.665127 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 7 00:22:18.665133 systemd-networkd[821]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 7 00:22:18.665832 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:22:18.666882 systemd-networkd[821]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:22:18.666886 systemd-networkd[821]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:22:18.667511 systemd-networkd[821]: eth0: Link UP
Jul 7 00:22:18.667515 systemd-networkd[821]: eth0: Gained carrier
Jul 7 00:22:18.667525 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 7 00:22:18.671482 systemd[1]: Reached target network.target - Network.
Jul 7 00:22:18.672011 systemd-networkd[821]: eth1: Link UP
Jul 7 00:22:18.672016 systemd-networkd[821]: eth1: Gained carrier
Jul 7 00:22:18.672033 systemd-networkd[821]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:22:18.696793 systemd-networkd[821]: eth0: DHCPv4 address 146.190.122.157/20, gateway 146.190.112.1 acquired from 169.254.169.253
Jul 7 00:22:18.699811 systemd-networkd[821]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253
Jul 7 00:22:18.725102 ignition[726]: Ignition 2.21.0
Jul 7 00:22:18.725121 ignition[726]: Stage: fetch-offline
Jul 7 00:22:18.725202 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:22:18.725217 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 00:22:18.725380 ignition[726]: parsed url from cmdline: ""
Jul 7 00:22:18.725386 ignition[726]: no config URL provided
Jul 7 00:22:18.725396 ignition[726]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:22:18.725409 ignition[726]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:22:18.725417 ignition[726]: failed to fetch config: resource requires networking
Jul 7 00:22:18.730742 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:22:18.728383 ignition[726]: Ignition finished successfully
Jul 7 00:22:18.733583 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:22:18.770846 ignition[831]: Ignition 2.21.0
Jul 7 00:22:18.770861 ignition[831]: Stage: fetch
Jul 7 00:22:18.771016 ignition[831]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:22:18.771026 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 00:22:18.771150 ignition[831]: parsed url from cmdline: ""
Jul 7 00:22:18.771155 ignition[831]: no config URL provided
Jul 7 00:22:18.771163 ignition[831]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:22:18.771176 ignition[831]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:22:18.771227 ignition[831]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 7 00:22:18.801890 ignition[831]: GET result: OK
Jul 7 00:22:18.802484 ignition[831]: parsing config with SHA512: b98c938e7798315965eace90dbcc652dbf2941e235e1665934ed9fade2635015a6fd1d2fb437d50a51fca9cc6bad84723f6b00cc281296a599a47b62946203f9
Jul 7 00:22:18.807937 unknown[831]: fetched base config from "system"
Jul 7 00:22:18.807947 unknown[831]: fetched base config from "system"
Jul 7 00:22:18.808262 ignition[831]: fetch: fetch complete
Jul 7 00:22:18.807953 unknown[831]: fetched user config from "digitalocean"
Jul 7 00:22:18.808267 ignition[831]: fetch: fetch passed
Jul 7 00:22:18.808318 ignition[831]: Ignition finished successfully
Jul 7 00:22:18.810512 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:22:18.812942 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:22:18.858352 ignition[838]: Ignition 2.21.0
Jul 7 00:22:18.858972 ignition[838]: Stage: kargs
Jul 7 00:22:18.859142 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:22:18.859152 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 00:22:18.860290 ignition[838]: kargs: kargs passed
Jul 7 00:22:18.863902 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:22:18.860350 ignition[838]: Ignition finished successfully
Jul 7 00:22:18.867437 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:22:18.913429 ignition[844]: Ignition 2.21.0
Jul 7 00:22:18.913454 ignition[844]: Stage: disks
Jul 7 00:22:18.913820 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:22:18.913840 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 00:22:18.917179 ignition[844]: disks: disks passed
Jul 7 00:22:18.917278 ignition[844]: Ignition finished successfully
Jul 7 00:22:18.919283 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:22:18.920271 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:22:18.920657 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:22:18.921666 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:22:18.922553 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:22:18.923260 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:22:18.925165 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:22:18.957625 systemd-fsck[853]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 00:22:18.961277 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:22:18.963045 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:22:19.102704 kernel: EXT4-fs (vda9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none.
Jul 7 00:22:19.103409 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:22:19.104789 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:22:19.107248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:22:19.109257 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:22:19.121083 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Jul 7 00:22:19.123829 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 00:22:19.125781 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:22:19.126855 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:22:19.130728 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (861)
Jul 7 00:22:19.132193 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:22:19.135710 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:22:19.137579 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:22:19.138666 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:22:19.138715 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 00:22:19.172430 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:22:19.224655 initrd-setup-root[891]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:22:19.236501 coreos-metadata[863]: Jul 07 00:22:19.236 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 7 00:22:19.238834 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:22:19.242807 coreos-metadata[864]: Jul 07 00:22:19.242 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 7 00:22:19.245734 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:22:19.248002 coreos-metadata[863]: Jul 07 00:22:19.247 INFO Fetch successful
Jul 7 00:22:19.253529 coreos-metadata[864]: Jul 07 00:22:19.253 INFO Fetch successful
Jul 7 00:22:19.257410 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Jul 7 00:22:19.257577 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Jul 7 00:22:19.260135 initrd-setup-root[912]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:22:19.262968 coreos-metadata[864]: Jul 07 00:22:19.262 INFO wrote hostname ci-4344.1.1-7-4873e20794 to /sysroot/etc/hostname
Jul 7 00:22:19.265041 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:22:19.376912 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:22:19.379115 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:22:19.380583 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:22:19.407703 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:22:19.425097 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:22:19.440824 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:22:19.444020 ignition[983]: INFO : Ignition 2.21.0
Jul 7 00:22:19.444020 ignition[983]: INFO : Stage: mount
Jul 7 00:22:19.445792 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:22:19.445792 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 00:22:19.449374 ignition[983]: INFO : mount: mount passed
Jul 7 00:22:19.449374 ignition[983]: INFO : Ignition finished successfully
Jul 7 00:22:19.452541 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:22:19.455039 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:22:19.483365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:22:19.510727 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (993)
Jul 7 00:22:19.514896 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:22:19.514987 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:22:19.515009 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 00:22:19.520800 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:22:19.561769 ignition[1009]: INFO : Ignition 2.21.0
Jul 7 00:22:19.561769 ignition[1009]: INFO : Stage: files
Jul 7 00:22:19.562817 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:22:19.562817 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 00:22:19.568101 ignition[1009]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 00:22:19.569912 ignition[1009]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 00:22:19.569912 ignition[1009]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 00:22:19.573031 ignition[1009]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 00:22:19.573990 ignition[1009]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 00:22:19.574866 unknown[1009]: wrote ssh authorized keys file for user: core
Jul 7 00:22:19.575435 ignition[1009]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 00:22:19.578325 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 00:22:19.578325 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 7 00:22:19.607350 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 00:22:19.743761 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 00:22:19.743761 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 00:22:19.743761 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:22:19.743761 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:22:19.743761 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:22:19.743761 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:22:19.748281 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:22:19.748281 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:22:19.748281 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:22:19.748281 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:22:19.748281 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:22:19.748281 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:22:19.748281 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:22:19.748281 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:22:19.748281 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 7 00:22:19.987505 systemd-networkd[821]: eth0: Gained IPv6LL
Jul 7 00:22:20.034885 systemd-networkd[821]: eth1: Gained IPv6LL
Jul 7 00:22:20.473585 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 00:22:20.766292 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:22:20.768448 ignition[1009]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 00:22:20.769683 ignition[1009]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:22:20.771226 ignition[1009]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:22:20.772531 ignition[1009]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 00:22:20.772531 ignition[1009]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 00:22:20.772531 ignition[1009]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 00:22:20.772531 ignition[1009]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:22:20.772531 ignition[1009]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:22:20.772531 ignition[1009]: INFO : files: files passed
Jul 7 00:22:20.772531 ignition[1009]: INFO : Ignition finished successfully
Jul 7 00:22:20.775614 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:22:20.778766 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 00:22:20.781867 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:22:20.794383 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:22:20.794546 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 00:22:20.802804 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:22:20.802804 initrd-setup-root-after-ignition[1039]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:22:20.804249 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:22:20.806099 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:22:20.807510 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:22:20.809350 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:22:20.871199 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:22:20.871325 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:22:20.872459 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:22:20.872939 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:22:20.873700 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:22:20.874716 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:22:20.915868 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:22:20.918775 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:22:20.943639 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jul 7 00:22:20.944263 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:22:20.945844 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:22:20.946333 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:22:20.946507 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:22:20.947312 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:22:20.947781 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:22:20.948439 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:22:20.949101 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:22:20.949875 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:22:20.950627 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:22:20.951349 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:22:20.952017 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:22:20.952795 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:22:20.953489 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:22:20.954253 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:22:20.955044 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:22:20.955262 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:22:20.956454 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:22:20.957381 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:22:20.958393 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:22:20.958527 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 7 00:22:20.959288 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:22:20.959480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:22:20.960613 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:22:20.960821 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:22:20.962185 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:22:20.962367 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:22:20.963009 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 00:22:20.963177 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 00:22:20.965154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:22:20.967019 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:22:20.967239 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:22:20.971960 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:22:20.972471 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:22:20.972702 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:22:20.975190 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:22:20.975366 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:22:20.985627 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:22:20.989826 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 7 00:22:21.008334 ignition[1063]: INFO : Ignition 2.21.0 Jul 7 00:22:21.009810 ignition[1063]: INFO : Stage: umount Jul 7 00:22:21.011272 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:22:21.011272 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 7 00:22:21.014516 ignition[1063]: INFO : umount: umount passed Jul 7 00:22:21.013161 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:22:21.015797 ignition[1063]: INFO : Ignition finished successfully Jul 7 00:22:21.018612 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:22:21.021050 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:22:21.029944 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:22:21.030063 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:22:21.031957 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:22:21.032033 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:22:21.032821 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 00:22:21.032873 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 00:22:21.033923 systemd[1]: Stopped target network.target - Network. Jul 7 00:22:21.035911 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:22:21.036028 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:22:21.045207 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:22:21.045576 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:22:21.049785 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:22:21.050234 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:22:21.050540 systemd[1]: Stopped target sockets.target - Socket Units. 
Jul 7 00:22:21.050877 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:22:21.050926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:22:21.051247 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:22:21.051284 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:22:21.051597 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:22:21.051650 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:22:21.052852 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:22:21.052916 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:22:21.053876 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:22:21.054772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:22:21.056116 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:22:21.056236 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:22:21.057310 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:22:21.057428 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:22:21.063398 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:22:21.063535 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:22:21.067237 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:22:21.067511 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:22:21.067629 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:22:21.069313 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:22:21.070353 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:22:21.070976 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jul 7 00:22:21.071016 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:22:21.072653 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:22:21.073020 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:22:21.073073 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:22:21.075383 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:22:21.075448 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:22:21.076811 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:22:21.076864 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:22:21.078361 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:22:21.078416 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:22:21.079551 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:22:21.083805 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:22:21.083912 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:22:21.098880 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:22:21.102145 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:22:21.103584 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:22:21.103845 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:22:21.105607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:22:21.105749 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:22:21.106614 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jul 7 00:22:21.106662 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:22:21.107441 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:22:21.107532 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:22:21.108790 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:22:21.108848 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:22:21.109746 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:22:21.109815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:22:21.112162 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:22:21.113330 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:22:21.113402 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:22:21.116729 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:22:21.117332 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:22:21.118554 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:22:21.119197 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:22:21.122158 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:22:21.122801 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:22:21.122844 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:22:21.132735 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:22:21.132873 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jul 7 00:22:21.134122 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:22:21.135533 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:22:21.166848 systemd[1]: Switching root. Jul 7 00:22:21.199444 systemd-journald[213]: Journal stopped Jul 7 00:22:22.393954 systemd-journald[213]: Received SIGTERM from PID 1 (systemd). Jul 7 00:22:22.394045 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:22:22.394066 kernel: SELinux: policy capability open_perms=1 Jul 7 00:22:22.394081 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:22:22.394093 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:22:22.394105 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:22:22.394118 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:22:22.394132 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:22:22.394150 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:22:22.394167 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:22:22.394182 kernel: audit: type=1403 audit(1751847741.346:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:22:22.394201 systemd[1]: Successfully loaded SELinux policy in 56.492ms. Jul 7 00:22:22.394235 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.760ms. Jul 7 00:22:22.394250 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:22:22.394264 systemd[1]: Detected virtualization kvm. Jul 7 00:22:22.394276 systemd[1]: Detected architecture x86-64. Jul 7 00:22:22.394290 systemd[1]: Detected first boot. Jul 7 00:22:22.394303 systemd[1]: Hostname set to . 
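The two journald timestamps above bracket the switch-root handoff: the initrd journal stops at 00:22:21.199444 and receives SIGTERM from the new PID 1 at 00:22:22.393954. A quick, illustrative way to compute that window from the logged timestamps:

```python
from datetime import datetime

# Timestamps copied from the two systemd-journald lines above.
stopped = datetime.strptime("00:22:21.199444", "%H:%M:%S.%f")
sigterm = datetime.strptime("00:22:22.393954", "%H:%M:%S.%f")

# Elapsed time between journal stop and SIGTERM: the switch-root window,
# which also covers the SELinux policy load reported in between.
pivot_s = (sigterm - stopped).total_seconds()
print(f"switch-root window: ~{pivot_s:.2f} s")
```

About 1.19 s elapses across the pivot, of which the SELinux policy load accounts for the reported 56.492 ms.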
Jul 7 00:22:22.394315 systemd[1]: Initializing machine ID from VM UUID. Jul 7 00:22:22.394329 zram_generator::config[1107]: No configuration found. Jul 7 00:22:22.394346 kernel: Guest personality initialized and is inactive Jul 7 00:22:22.394358 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 00:22:22.394371 kernel: Initialized host personality Jul 7 00:22:22.394382 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:22:22.394399 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:22:22.394414 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:22:22.394427 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:22:22.394440 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:22:22.394456 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:22:22.394469 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:22:22.394482 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:22:22.394494 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:22:22.394506 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:22:22.394519 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:22:22.394533 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:22:22.394545 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:22:22.394558 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:22:22.394575 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:22:22.394589 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 7 00:22:22.394602 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:22:22.394616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:22:22.394629 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:22:22.394646 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:22:22.394661 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 00:22:22.394688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:22:22.399962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:22:22.399995 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:22:22.400009 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:22:22.400022 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:22:22.400035 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:22:22.400047 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:22:22.400060 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:22:22.400088 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:22:22.400101 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:22:22.400113 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:22:22.400127 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:22:22.400139 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:22:22.400152 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 7 00:22:22.400164 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:22:22.400177 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:22:22.400189 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:22:22.400205 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:22:22.400218 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:22:22.400234 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:22:22.400247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:22.400260 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:22:22.400273 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:22:22.400285 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:22:22.400301 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:22:22.400323 systemd[1]: Reached target machines.target - Containers. Jul 7 00:22:22.400345 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:22:22.400364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:22:22.400384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:22:22.400400 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:22:22.400419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:22:22.400437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 7 00:22:22.400450 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:22:22.400463 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:22:22.400479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:22:22.400492 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:22:22.400506 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:22:22.400519 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:22:22.400531 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:22:22.400543 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:22:22.400562 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:22:22.400576 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:22:22.400592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:22:22.400605 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:22:22.400619 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:22:22.400631 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:22:22.400645 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:22:22.400660 kernel: fuse: init (API version 7.41) Jul 7 00:22:22.401070 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:22:22.401108 systemd[1]: Stopped verity-setup.service. 
Jul 7 00:22:22.401129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:22.401155 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:22:22.401174 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:22:22.401195 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:22:22.401259 systemd-journald[1181]: Collecting audit messages is disabled. Jul 7 00:22:22.401291 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:22:22.401304 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:22:22.401316 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:22:22.401331 systemd-journald[1181]: Journal started Jul 7 00:22:22.401359 systemd-journald[1181]: Runtime Journal (/run/log/journal/865912ec1ac44f83a5b1d54a4d3916c5) is 4.9M, max 39.5M, 34.6M free. Jul 7 00:22:22.116854 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:22:22.140281 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 00:22:22.140864 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:22:22.411532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:22:22.411606 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:22:22.415287 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:22:22.415528 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:22:22.417425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:22:22.427698 kernel: ACPI: bus type drm_connector registered Jul 7 00:22:22.429888 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:22:22.431029 systemd[1]: modprobe@drm.service: Deactivated successfully. 
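The runtime-journal figures journald reports above are self-consistent: the 4.9M in use plus the 34.6M of free headroom equals the 39.5M cap. A quick arithmetic check using the values copied from that message:

```python
# Runtime Journal figures from the systemd-journald line above.
used_mib = 4.9   # "is 4.9M"
max_mib = 39.5   # "max 39.5M"
free_mib = 34.6  # "34.6M free"

# "free" is headroom up to the cap, so used + free should equal max.
assert abs((used_mib + free_mib) - max_mib) < 1e-6
print(f"{used_mib + free_mib:.1f} MiB used+free == {max_mib} MiB cap")
```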
Jul 7 00:22:22.432139 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:22:22.433140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:22:22.433389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:22:22.435637 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:22:22.436776 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:22:22.444745 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:22:22.473926 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:22:22.478796 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:22:22.486153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:22:22.491938 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:22:22.498942 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:22:22.500738 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:22:22.502921 kernel: loop: module loaded Jul 7 00:22:22.504593 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:22:22.507027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:22:22.509111 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:22:22.513091 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:22:22.513173 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:22:22.520432 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:22:22.524909 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jul 7 00:22:22.525697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:22:22.534177 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:22:22.539924 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:22:22.540548 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:22:22.544904 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:22:22.545586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:22:22.549178 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:22:22.562099 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:22:22.565966 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:22:22.568264 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:22:22.572594 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:22:22.586804 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:22:22.593486 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:22:22.596009 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:22:22.604705 kernel: loop0: detected capacity change from 0 to 113872 Jul 7 00:22:22.604768 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Jul 7 00:22:22.630722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:22:22.641802 systemd-journald[1181]: Time spent on flushing to /var/log/journal/865912ec1ac44f83a5b1d54a4d3916c5 is 104.829ms for 1013 entries. Jul 7 00:22:22.641802 systemd-journald[1181]: System Journal (/var/log/journal/865912ec1ac44f83a5b1d54a4d3916c5) is 8M, max 195.6M, 187.6M free. Jul 7 00:22:22.762860 systemd-journald[1181]: Received client request to flush runtime journal. Jul 7 00:22:22.762938 kernel: loop1: detected capacity change from 0 to 146240 Jul 7 00:22:22.762957 kernel: loop2: detected capacity change from 0 to 221472 Jul 7 00:22:22.651257 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:22:22.728803 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:22:22.736046 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:22:22.766123 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:22:22.778594 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:22:22.791717 kernel: loop3: detected capacity change from 0 to 8 Jul 7 00:22:22.817094 kernel: loop4: detected capacity change from 0 to 113872 Jul 7 00:22:22.853625 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jul 7 00:22:22.853652 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jul 7 00:22:22.877422 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:22:22.895243 kernel: loop5: detected capacity change from 0 to 146240 Jul 7 00:22:22.935948 kernel: loop6: detected capacity change from 0 to 221472 Jul 7 00:22:22.972711 kernel: loop7: detected capacity change from 0 to 8 Jul 7 00:22:22.984094 (sd-merge)[1255]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. 
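The flush statistics above (104.829 ms for 1013 entries into the 8M system journal) work out to roughly a tenth of a millisecond per entry. Computed from the logged values:

```python
# Flush statistics from the systemd-journald line above.
flush_ms = 104.829  # total time flushing to /var/log/journal
entries = 1013      # journal entries flushed

# Average flush cost per entry, in microseconds.
per_entry_us = flush_ms * 1000 / entries
print(f"~{per_entry_us:.0f} µs per entry")
```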
Jul 7 00:22:22.989657 (sd-merge)[1255]: Merged extensions into '/usr'. Jul 7 00:22:23.002862 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:22:23.002883 systemd[1]: Reloading... Jul 7 00:22:23.222162 zram_generator::config[1282]: No configuration found. Jul 7 00:22:23.407703 ldconfig[1228]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:22:23.434184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:22:23.560149 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:22:23.560723 systemd[1]: Reloading finished in 557 ms. Jul 7 00:22:23.573756 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:22:23.578057 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:22:23.586887 systemd[1]: Starting ensure-sysext.service... Jul 7 00:22:23.588923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:22:23.630204 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:22:23.630226 systemd[1]: Reloading... Jul 7 00:22:23.678451 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:22:23.678502 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:22:23.681448 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:22:23.682930 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jul 7 00:22:23.687397 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:22:23.687849 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Jul 7 00:22:23.687930 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Jul 7 00:22:23.700614 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:22:23.700634 systemd-tmpfiles[1327]: Skipping /boot Jul 7 00:22:23.733714 zram_generator::config[1350]: No configuration found. Jul 7 00:22:23.757909 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:22:23.757925 systemd-tmpfiles[1327]: Skipping /boot Jul 7 00:22:23.929292 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:22:24.037303 systemd[1]: Reloading finished in 406 ms. Jul 7 00:22:24.051715 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:22:24.058742 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:22:24.066941 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:22:24.072096 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:22:24.075666 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:22:24.080292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:22:24.092948 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:22:24.095425 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jul 7 00:22:24.106313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:24.106601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:22:24.113203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:22:24.116599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:22:24.124055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:22:24.124776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:22:24.124988 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:22:24.125171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:24.131261 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:24.131565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:22:24.133096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:22:24.133255 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 7 00:22:24.133404 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:24.146198 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:24.146615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:22:24.151375 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:22:24.152944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:22:24.153014 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:22:24.162006 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:22:24.162619 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:24.164943 systemd[1]: Finished ensure-sysext.service. Jul 7 00:22:24.168930 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:22:24.194028 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 00:22:24.198353 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:22:24.199496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:22:24.200176 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:22:24.213042 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:22:24.218139 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 7 00:22:24.221907 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:22:24.222575 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:22:24.224965 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:22:24.226446 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:22:24.229257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:22:24.229484 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:22:24.230914 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:22:24.248403 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:22:24.248610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:22:24.254234 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:22:24.269534 systemd-udevd[1403]: Using default interface naming scheme 'v255'. Jul 7 00:22:24.273648 augenrules[1440]: No rules Jul 7 00:22:24.275251 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:22:24.275513 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:22:24.295710 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:22:24.302302 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:22:24.306866 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:22:24.480518 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. 
Jul 7 00:22:24.483612 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jul 7 00:22:24.484018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:24.484168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:22:24.487860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:22:24.492893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:22:24.495984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:22:24.496459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:22:24.496498 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:22:24.496529 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:22:24.496546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:22:24.511968 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 00:22:24.532939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:22:24.533778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:22:24.548230 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:22:24.548813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 7 00:22:24.550327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:22:24.550507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:22:24.551916 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:22:24.553736 kernel: ISO 9660 Extensions: RRIP_1991A Jul 7 00:22:24.553623 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:22:24.571637 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jul 7 00:22:24.704394 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 00:22:24.705670 systemd-networkd[1454]: lo: Link UP Jul 7 00:22:24.705694 systemd-networkd[1454]: lo: Gained carrier Jul 7 00:22:24.705867 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:22:24.708768 systemd-resolved[1402]: Positive Trust Anchors: Jul 7 00:22:24.709110 systemd-resolved[1402]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:22:24.709153 systemd-resolved[1402]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:22:24.711357 systemd-networkd[1454]: Enumeration completed Jul 7 00:22:24.711504 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 7 00:22:24.712528 systemd-networkd[1454]: eth0: Configuring with /run/systemd/network/10-22:3e:dc:c0:c6:f3.network. Jul 7 00:22:24.713662 systemd-networkd[1454]: eth1: Configuring with /run/systemd/network/10-46:dc:27:b5:9b:12.network. Jul 7 00:22:24.713854 systemd-timesyncd[1418]: No network connectivity, watching for changes. Jul 7 00:22:24.714663 systemd-networkd[1454]: eth0: Link UP Jul 7 00:22:24.715212 systemd-networkd[1454]: eth0: Gained carrier Jul 7 00:22:24.716809 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:22:24.718660 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:22:24.719996 systemd-networkd[1454]: eth1: Link UP Jul 7 00:22:24.720669 systemd-networkd[1454]: eth1: Gained carrier Jul 7 00:22:24.727079 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Jul 7 00:22:24.731340 systemd-resolved[1402]: Using system hostname 'ci-4344.1.1-7-4873e20794'. Jul 7 00:22:24.743897 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:22:24.744427 systemd[1]: Reached target network.target - Network. Jul 7 00:22:24.744869 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:22:24.745291 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:22:24.745883 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:22:24.746324 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:22:24.746786 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 00:22:24.747353 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:22:24.748308 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jul 7 00:22:24.748802 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:22:24.750266 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:22:24.750322 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:22:24.750837 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:22:24.752513 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:22:24.754450 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:22:24.761560 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:22:24.761022 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:22:24.762225 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:22:24.764207 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:22:24.771793 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:22:24.772807 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:22:24.774744 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:22:24.780137 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:22:24.782606 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:22:24.783708 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 7 00:22:24.784460 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:22:24.784930 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:22:24.784964 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jul 7 00:22:24.787280 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:22:24.790860 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 00:22:24.795995 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:22:24.801081 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:22:24.805346 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:22:24.809816 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:22:24.815135 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:22:24.815712 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:22:24.822981 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:22:24.830024 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:22:24.839004 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:22:24.852978 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:22:24.859150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:22:24.866828 coreos-metadata[1507]: Jul 07 00:22:24.866 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 7 00:22:24.879756 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 7 00:22:24.880120 coreos-metadata[1507]: Jul 07 00:22:24.879 INFO Fetch successful Jul 7 00:22:24.871109 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:22:24.874654 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 7 00:22:24.876439 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:22:24.878824 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:22:24.880700 jq[1510]: false Jul 7 00:22:24.888976 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:22:24.902022 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:22:24.902773 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:22:24.903082 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:22:24.909850 systemd-timesyncd[1418]: Contacted time server 173.255.255.133:123 (1.flatcar.pool.ntp.org). Jul 7 00:22:24.909904 systemd-timesyncd[1418]: Initial clock synchronization to Mon 2025-07-07 00:22:25.146028 UTC. Jul 7 00:22:24.921363 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Refreshing passwd entry cache Jul 7 00:22:24.921764 oslogin_cache_refresh[1512]: Refreshing passwd entry cache Jul 7 00:22:24.930732 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Failure getting users, quitting Jul 7 00:22:24.930732 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:22:24.930732 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Refreshing group entry cache Jul 7 00:22:24.926844 oslogin_cache_refresh[1512]: Failure getting users, quitting Jul 7 00:22:24.926867 oslogin_cache_refresh[1512]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 7 00:22:24.926927 oslogin_cache_refresh[1512]: Refreshing group entry cache Jul 7 00:22:24.939754 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Failure getting groups, quitting Jul 7 00:22:24.939754 google_oslogin_nss_cache[1512]: oslogin_cache_refresh[1512]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:22:24.931887 oslogin_cache_refresh[1512]: Failure getting groups, quitting Jul 7 00:22:24.931902 oslogin_cache_refresh[1512]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:22:24.950665 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 00:22:24.950926 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:22:24.962632 jq[1521]: true Jul 7 00:22:24.973079 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:22:24.972317 dbus-daemon[1508]: [system] SELinux support is enabled Jul 7 00:22:24.994345 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:22:24.999354 extend-filesystems[1511]: Found /dev/vda6 Jul 7 00:22:24.994910 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:22:24.995246 (ntainerd)[1539]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:22:25.017670 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:22:25.018439 extend-filesystems[1511]: Found /dev/vda9 Jul 7 00:22:25.018919 update_engine[1520]: I20250707 00:22:25.015451 1520 main.cc:92] Flatcar Update Engine starting Jul 7 00:22:25.017726 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 7 00:22:25.019427 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:22:25.019524 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jul 7 00:22:25.019547 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:22:25.037739 extend-filesystems[1511]: Checking size of /dev/vda9 Jul 7 00:22:25.049568 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 00:22:25.049897 update_engine[1520]: I20250707 00:22:25.045879 1520 update_check_scheduler.cc:74] Next update check in 7m33s Jul 7 00:22:25.045270 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:22:25.078871 jq[1542]: true Jul 7 00:22:25.084217 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:22:25.085358 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:22:25.086835 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:22:25.089845 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:22:25.093288 extend-filesystems[1511]: Resized partition /dev/vda9 Jul 7 00:22:25.099644 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:22:25.100920 tar[1533]: linux-amd64/helm Jul 7 00:22:25.114312 extend-filesystems[1563]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:22:25.123726 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jul 7 00:22:25.207916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jul 7 00:22:25.253037 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 7 00:22:25.269025 extend-filesystems[1563]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 00:22:25.269025 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 7 00:22:25.269025 extend-filesystems[1563]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 7 00:22:25.290626 extend-filesystems[1511]: Resized filesystem in /dev/vda9 Jul 7 00:22:25.285586 systemd-logind[1519]: New seat seat0. Jul 7 00:22:25.300139 bash[1582]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:22:25.288510 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:22:25.288805 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:22:25.298443 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:22:25.327584 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:22:25.359279 systemd[1]: Starting sshkeys.service... Jul 7 00:22:25.365091 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:22:25.434991 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:22:25.468316 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 00:22:25.480957 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jul 7 00:22:25.629260 coreos-metadata[1602]: Jul 07 00:22:25.628 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 7 00:22:25.647581 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:22:25.652516 coreos-metadata[1602]: Jul 07 00:22:25.652 INFO Fetch successful Jul 7 00:22:25.665130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:22:25.678318 unknown[1602]: wrote ssh authorized keys file for user: core Jul 7 00:22:25.704120 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 7 00:22:25.704198 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 7 00:22:25.717599 update-ssh-keys[1617]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:22:25.718590 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 00:22:25.732243 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:22:25.733968 systemd[1]: Finished sshkeys.service. Jul 7 00:22:25.738672 locksmithd[1556]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:22:25.756492 containerd[1539]: time="2025-07-07T00:22:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:22:25.763888 containerd[1539]: time="2025-07-07T00:22:25.763824753Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:22:25.810755 kernel: Console: switching to colour dummy device 80x25 Jul 7 00:22:25.811863 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 7 00:22:25.811939 kernel: [drm] features: -context_init Jul 7 00:22:25.811831 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jul 7 00:22:25.814592 containerd[1539]: time="2025-07-07T00:22:25.814527634Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.834µs" Jul 7 00:22:25.814592 containerd[1539]: time="2025-07-07T00:22:25.814580461Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:22:25.815787 containerd[1539]: time="2025-07-07T00:22:25.814610336Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:22:25.816539 containerd[1539]: time="2025-07-07T00:22:25.815859580Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:22:25.816539 containerd[1539]: time="2025-07-07T00:22:25.815896397Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:22:25.816539 containerd[1539]: time="2025-07-07T00:22:25.815924854Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:22:25.816539 containerd[1539]: time="2025-07-07T00:22:25.815977969Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:22:25.816539 containerd[1539]: time="2025-07-07T00:22:25.815989588Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:22:25.816281 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jul 7 00:22:25.818213 containerd[1539]: time="2025-07-07T00:22:25.817729528Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:22:25.818213 containerd[1539]: time="2025-07-07T00:22:25.817752598Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:22:25.818213 containerd[1539]: time="2025-07-07T00:22:25.817769513Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:22:25.818213 containerd[1539]: time="2025-07-07T00:22:25.817778558Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:22:25.818213 containerd[1539]: time="2025-07-07T00:22:25.817894385Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:22:25.818213 containerd[1539]: time="2025-07-07T00:22:25.818128142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:22:25.818213 containerd[1539]: time="2025-07-07T00:22:25.818163404Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:22:25.818213 containerd[1539]: time="2025-07-07T00:22:25.818177012Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:22:25.820199 containerd[1539]: time="2025-07-07T00:22:25.818853844Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 
Jul 7 00:22:25.821288 containerd[1539]: time="2025-07-07T00:22:25.820968702Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:22:25.821288 containerd[1539]: time="2025-07-07T00:22:25.821117009Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823605994Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823693703Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823749551Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823764072Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823776813Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823787122Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823813561Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823828651Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823839430Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823849685Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 
Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823860829Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.823873650Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.824039261Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:22:25.824119 containerd[1539]: time="2025-07-07T00:22:25.824068347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824086849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824097937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824111141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824121872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824132940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824146484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824157912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 
Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824168270Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824179273Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824254509Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:22:25.824635 containerd[1539]: time="2025-07-07T00:22:25.824273472Z" level=info msg="Start snapshots syncer" Jul 7 00:22:25.827105 containerd[1539]: time="2025-07-07T00:22:25.825733494Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:22:25.827105 containerd[1539]: time="2025-07-07T00:22:25.826062606Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\
":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826125531Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826245785Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826407930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826430713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826442408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826454744Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826468745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:22:25.827305 containerd[1539]: 
time="2025-07-07T00:22:25.826479404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826490838Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826518409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826529303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:22:25.827305 containerd[1539]: time="2025-07-07T00:22:25.826539026Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:22:25.830600 kernel: [drm] number of scanouts: 1 Jul 7 00:22:25.831202 kernel: [drm] number of cap sets: 0 Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828155264Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828259841Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828270800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828281388Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828289597Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 
00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828308270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828327246Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828345705Z" level=info msg="runtime interface created" Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828351122Z" level=info msg="created NRI interface" Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828359370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828375335Z" level=info msg="Connect containerd service" Jul 7 00:22:25.831262 containerd[1539]: time="2025-07-07T00:22:25.828422744Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:22:25.833169 containerd[1539]: time="2025-07-07T00:22:25.832879503Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:22:25.858863 systemd-networkd[1454]: eth0: Gained IPv6LL Jul 7 00:22:25.859356 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:22:25.859744 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:22:25.864897 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:22:25.865652 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:22:25.867655 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:22:25.873245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 00:22:25.877266 systemd-logind[1519]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 00:22:25.878068 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 00:22:25.908751 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jul 7 00:22:25.988140 systemd-networkd[1454]: eth1: Gained IPv6LL
Jul 7 00:22:25.997875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:22:26.019960 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 00:22:26.030454 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 00:22:26.041582 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 00:22:26.043234 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 00:22:26.095724 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 00:22:26.121408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:22:26.122192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:22:26.123915 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:22:26.127158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:22:26.132274 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.140154955Z" level=info msg="Start subscribing containerd event"
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.140224533Z" level=info msg="Start recovering state"
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.140341727Z" level=info msg="Start event monitor"
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.140354975Z" level=info msg="Start cni network conf syncer for default"
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.140375999Z" level=info msg="Start streaming server"
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.140385638Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.140393972Z" level=info msg="runtime interface starting up..."
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.140400290Z" level=info msg="starting plugins..."
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.140414190Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.143059705Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.143136097Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 00:22:26.144815 containerd[1539]: time="2025-07-07T00:22:26.143239608Z" level=info msg="containerd successfully booted in 0.388493s"
Jul 7 00:22:26.143525 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 00:22:26.335794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:22:26.479938 kernel: EDAC MC: Ver: 3.0.0
Jul 7 00:22:26.697897 tar[1533]: linux-amd64/LICENSE
Jul 7 00:22:26.699314 tar[1533]: linux-amd64/README.md
Jul 7 00:22:26.719306 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 00:22:27.506116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:22:27.507108 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 00:22:27.508672 systemd[1]: Startup finished in 3.478s (kernel) + 5.722s (initrd) + 6.217s (userspace) = 15.417s.
Jul 7 00:22:27.517353 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:22:28.207279 kubelet[1682]: E0707 00:22:28.207210 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:22:28.210368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:22:28.210604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:22:28.211115 systemd[1]: kubelet.service: Consumed 1.289s CPU time, 263.6M memory peak.
Jul 7 00:22:29.140119 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 00:22:29.142018 systemd[1]: Started sshd@0-146.190.122.157:22-139.178.68.195:36846.service - OpenSSH per-connection server daemon (139.178.68.195:36846).
Jul 7 00:22:29.232847 sshd[1694]: Accepted publickey for core from 139.178.68.195 port 36846 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:22:29.234870 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:22:29.249435 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 00:22:29.251613 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 00:22:29.255547 systemd-logind[1519]: New session 1 of user core.
Jul 7 00:22:29.286884 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 00:22:29.291030 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 00:22:29.311010 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 00:22:29.314501 systemd-logind[1519]: New session c1 of user core.
Jul 7 00:22:29.480205 systemd[1698]: Queued start job for default target default.target.
Jul 7 00:22:29.490229 systemd[1698]: Created slice app.slice - User Application Slice.
Jul 7 00:22:29.490465 systemd[1698]: Reached target paths.target - Paths.
Jul 7 00:22:29.490653 systemd[1698]: Reached target timers.target - Timers.
Jul 7 00:22:29.492644 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 00:22:29.508430 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 00:22:29.508610 systemd[1698]: Reached target sockets.target - Sockets.
Jul 7 00:22:29.508668 systemd[1698]: Reached target basic.target - Basic System.
Jul 7 00:22:29.508728 systemd[1698]: Reached target default.target - Main User Target.
Jul 7 00:22:29.508771 systemd[1698]: Startup finished in 186ms.
Jul 7 00:22:29.508928 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 00:22:29.521027 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 00:22:29.594024 systemd[1]: Started sshd@1-146.190.122.157:22-139.178.68.195:36860.service - OpenSSH per-connection server daemon (139.178.68.195:36860).
Jul 7 00:22:29.656294 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 36860 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:22:29.658103 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:22:29.666423 systemd-logind[1519]: New session 2 of user core.
Jul 7 00:22:29.674085 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 00:22:29.739048 sshd[1711]: Connection closed by 139.178.68.195 port 36860
Jul 7 00:22:29.739821 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
Jul 7 00:22:29.751626 systemd[1]: sshd@1-146.190.122.157:22-139.178.68.195:36860.service: Deactivated successfully.
Jul 7 00:22:29.753862 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 00:22:29.755586 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit.
Jul 7 00:22:29.758830 systemd[1]: Started sshd@2-146.190.122.157:22-139.178.68.195:36876.service - OpenSSH per-connection server daemon (139.178.68.195:36876).
Jul 7 00:22:29.760509 systemd-logind[1519]: Removed session 2.
Jul 7 00:22:29.832095 sshd[1717]: Accepted publickey for core from 139.178.68.195 port 36876 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:22:29.834232 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:22:29.841951 systemd-logind[1519]: New session 3 of user core.
Jul 7 00:22:29.848042 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 00:22:29.925028 sshd[1719]: Connection closed by 139.178.68.195 port 36876
Jul 7 00:22:29.925852 sshd-session[1717]: pam_unix(sshd:session): session closed for user core
Jul 7 00:22:29.939717 systemd[1]: sshd@2-146.190.122.157:22-139.178.68.195:36876.service: Deactivated successfully.
Jul 7 00:22:29.942117 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 00:22:29.943314 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit.
Jul 7 00:22:29.948299 systemd[1]: Started sshd@3-146.190.122.157:22-139.178.68.195:36890.service - OpenSSH per-connection server daemon (139.178.68.195:36890).
Jul 7 00:22:29.950682 systemd-logind[1519]: Removed session 3.
Jul 7 00:22:30.016753 sshd[1725]: Accepted publickey for core from 139.178.68.195 port 36890 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:22:30.019034 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:22:30.027968 systemd-logind[1519]: New session 4 of user core.
Jul 7 00:22:30.034007 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 00:22:30.099446 sshd[1727]: Connection closed by 139.178.68.195 port 36890
Jul 7 00:22:30.099142 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Jul 7 00:22:30.111766 systemd[1]: sshd@3-146.190.122.157:22-139.178.68.195:36890.service: Deactivated successfully.
Jul 7 00:22:30.114712 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 00:22:30.115936 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit.
Jul 7 00:22:30.120054 systemd[1]: Started sshd@4-146.190.122.157:22-139.178.68.195:36902.service - OpenSSH per-connection server daemon (139.178.68.195:36902).
Jul 7 00:22:30.121379 systemd-logind[1519]: Removed session 4.
Jul 7 00:22:30.180842 sshd[1733]: Accepted publickey for core from 139.178.68.195 port 36902 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:22:30.182790 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:22:30.188619 systemd-logind[1519]: New session 5 of user core.
Jul 7 00:22:30.200045 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 00:22:30.276169 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 00:22:30.276606 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:22:30.295992 sudo[1736]: pam_unix(sudo:session): session closed for user root
Jul 7 00:22:30.299857 sshd[1735]: Connection closed by 139.178.68.195 port 36902
Jul 7 00:22:30.301005 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Jul 7 00:22:30.311980 systemd[1]: sshd@4-146.190.122.157:22-139.178.68.195:36902.service: Deactivated successfully.
Jul 7 00:22:30.314057 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 00:22:30.315193 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit.
Jul 7 00:22:30.319051 systemd[1]: Started sshd@5-146.190.122.157:22-139.178.68.195:36916.service - OpenSSH per-connection server daemon (139.178.68.195:36916).
Jul 7 00:22:30.321571 systemd-logind[1519]: Removed session 5.
Jul 7 00:22:30.384243 sshd[1742]: Accepted publickey for core from 139.178.68.195 port 36916 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:22:30.386499 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:22:30.394049 systemd-logind[1519]: New session 6 of user core.
Jul 7 00:22:30.402039 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 00:22:30.463107 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 00:22:30.463406 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:22:30.469638 sudo[1746]: pam_unix(sudo:session): session closed for user root
Jul 7 00:22:30.477024 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 00:22:30.477682 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:22:30.490264 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 00:22:30.538740 augenrules[1768]: No rules
Jul 7 00:22:30.539540 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 00:22:30.539809 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 00:22:30.541549 sudo[1745]: pam_unix(sudo:session): session closed for user root
Jul 7 00:22:30.544959 sshd[1744]: Connection closed by 139.178.68.195 port 36916
Jul 7 00:22:30.545898 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Jul 7 00:22:30.556380 systemd[1]: sshd@5-146.190.122.157:22-139.178.68.195:36916.service: Deactivated successfully.
Jul 7 00:22:30.558547 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 00:22:30.559782 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit.
Jul 7 00:22:30.564019 systemd[1]: Started sshd@6-146.190.122.157:22-139.178.68.195:36922.service - OpenSSH per-connection server daemon (139.178.68.195:36922).
Jul 7 00:22:30.566382 systemd-logind[1519]: Removed session 6.
Jul 7 00:22:30.626533 sshd[1777]: Accepted publickey for core from 139.178.68.195 port 36922 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:22:30.628308 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:22:30.633509 systemd-logind[1519]: New session 7 of user core.
Jul 7 00:22:30.643016 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 00:22:30.702957 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 00:22:30.703275 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:22:31.223355 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 00:22:31.237748 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 00:22:31.603672 dockerd[1800]: time="2025-07-07T00:22:31.603041585Z" level=info msg="Starting up"
Jul 7 00:22:31.605717 dockerd[1800]: time="2025-07-07T00:22:31.605648447Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 00:22:31.667482 dockerd[1800]: time="2025-07-07T00:22:31.666885125Z" level=info msg="Loading containers: start."
Jul 7 00:22:31.679733 kernel: Initializing XFRM netlink socket
Jul 7 00:22:31.982536 systemd-networkd[1454]: docker0: Link UP
Jul 7 00:22:31.986105 dockerd[1800]: time="2025-07-07T00:22:31.985980833Z" level=info msg="Loading containers: done."
Jul 7 00:22:32.004600 dockerd[1800]: time="2025-07-07T00:22:32.004233130Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 00:22:32.004600 dockerd[1800]: time="2025-07-07T00:22:32.004327871Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 00:22:32.004600 dockerd[1800]: time="2025-07-07T00:22:32.004447062Z" level=info msg="Initializing buildkit"
Jul 7 00:22:32.027582 dockerd[1800]: time="2025-07-07T00:22:32.027530198Z" level=info msg="Completed buildkit initialization"
Jul 7 00:22:32.036345 dockerd[1800]: time="2025-07-07T00:22:32.036268204Z" level=info msg="Daemon has completed initialization"
Jul 7 00:22:32.036519 dockerd[1800]: time="2025-07-07T00:22:32.036456343Z" level=info msg="API listen on /run/docker.sock"
Jul 7 00:22:32.036887 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 00:22:32.964426 containerd[1539]: time="2025-07-07T00:22:32.964284562Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 00:22:33.580266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744090578.mount: Deactivated successfully.
Jul 7 00:22:34.662852 containerd[1539]: time="2025-07-07T00:22:34.662777098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:22:34.664665 containerd[1539]: time="2025-07-07T00:22:34.663998736Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 7 00:22:34.664665 containerd[1539]: time="2025-07-07T00:22:34.664224930Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:22:34.667435 containerd[1539]: time="2025-07-07T00:22:34.667371772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:22:34.669372 containerd[1539]: time="2025-07-07T00:22:34.669202704Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.704869314s"
Jul 7 00:22:34.669595 containerd[1539]: time="2025-07-07T00:22:34.669574080Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 7 00:22:34.670257 containerd[1539]: time="2025-07-07T00:22:34.670173805Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 00:22:36.158706 containerd[1539]: time="2025-07-07T00:22:36.158195044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:22:36.160043 containerd[1539]: time="2025-07-07T00:22:36.160008753Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 7 00:22:36.160774 containerd[1539]: time="2025-07-07T00:22:36.160474826Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:22:36.163188 containerd[1539]: time="2025-07-07T00:22:36.163131068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:22:36.164553 containerd[1539]: time="2025-07-07T00:22:36.164164683Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.493773467s"
Jul 7 00:22:36.164553 containerd[1539]: time="2025-07-07T00:22:36.164202279Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 7 00:22:36.164808 containerd[1539]: time="2025-07-07T00:22:36.164790685Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 00:22:37.287718 containerd[1539]: time="2025-07-07T00:22:37.287636445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:22:37.289659 containerd[1539]: time="2025-07-07T00:22:37.289591222Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 7 00:22:37.290710 containerd[1539]: time="2025-07-07T00:22:37.290632208Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:22:37.293848 containerd[1539]: time="2025-07-07T00:22:37.293745132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:22:37.294567 containerd[1539]: time="2025-07-07T00:22:37.294369432Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.129491034s"
Jul 7 00:22:37.294567 containerd[1539]: time="2025-07-07T00:22:37.294407289Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 7 00:22:37.295011 containerd[1539]: time="2025-07-07T00:22:37.294980620Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 00:22:38.323710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:22:38.327590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:22:38.523952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:22:38.538844 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:22:38.605831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744779629.mount: Deactivated successfully. Jul 7 00:22:38.633755 kubelet[2084]: E0707 00:22:38.633657 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:22:38.642234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:22:38.643066 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:22:38.643650 systemd[1]: kubelet.service: Consumed 219ms CPU time, 108.7M memory peak. Jul 7 00:22:39.141730 containerd[1539]: time="2025-07-07T00:22:39.140979023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:39.142866 containerd[1539]: time="2025-07-07T00:22:39.142825443Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 7 00:22:39.143531 containerd[1539]: time="2025-07-07T00:22:39.143504714Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:39.145670 containerd[1539]: time="2025-07-07T00:22:39.145638465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:39.146425 containerd[1539]: time="2025-07-07T00:22:39.146172794Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.851164039s" Jul 7 00:22:39.146425 containerd[1539]: time="2025-07-07T00:22:39.146278661Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 7 00:22:39.147696 containerd[1539]: time="2025-07-07T00:22:39.147531602Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:22:39.148999 systemd-resolved[1402]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jul 7 00:22:39.645148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46092656.mount: Deactivated successfully. Jul 7 00:22:40.480062 containerd[1539]: time="2025-07-07T00:22:40.479988812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:40.481158 containerd[1539]: time="2025-07-07T00:22:40.481124941Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 00:22:40.481590 containerd[1539]: time="2025-07-07T00:22:40.481516347Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:40.484341 containerd[1539]: time="2025-07-07T00:22:40.484280875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:40.485455 containerd[1539]: 
time="2025-07-07T00:22:40.485239221Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.337678714s" Jul 7 00:22:40.485455 containerd[1539]: time="2025-07-07T00:22:40.485282884Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:22:40.485974 containerd[1539]: time="2025-07-07T00:22:40.485949928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:22:40.920050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705879450.mount: Deactivated successfully. Jul 7 00:22:40.929707 containerd[1539]: time="2025-07-07T00:22:40.929505169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:22:40.930557 containerd[1539]: time="2025-07-07T00:22:40.930503451Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:22:40.931095 containerd[1539]: time="2025-07-07T00:22:40.931057662Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:22:40.933488 containerd[1539]: time="2025-07-07T00:22:40.933431686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jul 7 00:22:40.934796 containerd[1539]: time="2025-07-07T00:22:40.934749176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 448.669088ms" Jul 7 00:22:40.934978 containerd[1539]: time="2025-07-07T00:22:40.934952339Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:22:40.935880 containerd[1539]: time="2025-07-07T00:22:40.935749856Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 00:22:41.427915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277641793.mount: Deactivated successfully. Jul 7 00:22:42.242930 systemd-resolved[1402]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jul 7 00:22:43.064753 containerd[1539]: time="2025-07-07T00:22:43.064649615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:43.067109 containerd[1539]: time="2025-07-07T00:22:43.067050052Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 7 00:22:43.068945 containerd[1539]: time="2025-07-07T00:22:43.068748078Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:43.084710 containerd[1539]: time="2025-07-07T00:22:43.083483395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:43.086024 containerd[1539]: time="2025-07-07T00:22:43.085975086Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.150178134s" Jul 7 00:22:43.086228 containerd[1539]: time="2025-07-07T00:22:43.086201049Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 00:22:46.235376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:22:46.235837 systemd[1]: kubelet.service: Consumed 219ms CPU time, 108.7M memory peak. Jul 7 00:22:46.239356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:22:46.285595 systemd[1]: Reload requested from client PID 2232 ('systemctl') (unit session-7.scope)... 
Jul 7 00:22:46.285622 systemd[1]: Reloading... Jul 7 00:22:46.428709 zram_generator::config[2275]: No configuration found. Jul 7 00:22:46.575127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:22:46.717544 systemd[1]: Reloading finished in 431 ms. Jul 7 00:22:46.797102 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:22:46.797319 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:22:46.797755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:22:46.797833 systemd[1]: kubelet.service: Consumed 123ms CPU time, 97.7M memory peak. Jul 7 00:22:46.800406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:22:46.993153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:22:47.006274 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:22:47.074460 kubelet[2329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:22:47.074460 kubelet[2329]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 00:22:47.074460 kubelet[2329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:22:47.074900 kubelet[2329]: I0707 00:22:47.074579 2329 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:22:47.563715 kubelet[2329]: I0707 00:22:47.562969 2329 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:22:47.563715 kubelet[2329]: I0707 00:22:47.563025 2329 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:22:47.563715 kubelet[2329]: I0707 00:22:47.563639 2329 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:22:47.595901 kubelet[2329]: E0707 00:22:47.595848 2329 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.122.157:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.122.157:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:22:47.597208 kubelet[2329]: I0707 00:22:47.596936 2329 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:22:47.606178 kubelet[2329]: I0707 00:22:47.606143 2329 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:22:47.611397 kubelet[2329]: I0707 00:22:47.611364 2329 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:22:47.612070 kubelet[2329]: I0707 00:22:47.612037 2329 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:22:47.612254 kubelet[2329]: I0707 00:22:47.612211 2329 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:22:47.612453 kubelet[2329]: I0707 00:22:47.612252 2329 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-7-4873e20794","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:22:47.612587 kubelet[2329]: I0707 00:22:47.612461 2329 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:22:47.612587 kubelet[2329]: I0707 00:22:47.612476 2329 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 00:22:47.612645 kubelet[2329]: I0707 00:22:47.612621 2329 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:22:47.617422 kubelet[2329]: I0707 00:22:47.617083 2329 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:22:47.617422 kubelet[2329]: I0707 00:22:47.617146 2329 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:22:47.617422 kubelet[2329]: I0707 00:22:47.617189 2329 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:22:47.617422 kubelet[2329]: I0707 00:22:47.617212 2329 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:22:47.625298 kubelet[2329]: W0707 00:22:47.625243 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.122.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.122.157:6443: connect: connection refused Jul 7 00:22:47.625298 kubelet[2329]: E0707 00:22:47.625306 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.122.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.122.157:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:22:47.626076 kubelet[2329]: W0707 00:22:47.625904 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.122.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-7-4873e20794&limit=500&resourceVersion=0": dial tcp 146.190.122.157:6443: connect: 
connection refused Jul 7 00:22:47.626076 kubelet[2329]: I0707 00:22:47.625951 2329 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:22:47.626076 kubelet[2329]: E0707 00:22:47.625975 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.122.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-7-4873e20794&limit=500&resourceVersion=0\": dial tcp 146.190.122.157:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:22:47.629165 kubelet[2329]: I0707 00:22:47.629131 2329 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:22:47.629298 kubelet[2329]: W0707 00:22:47.629206 2329 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:22:47.629966 kubelet[2329]: I0707 00:22:47.629938 2329 server.go:1274] "Started kubelet" Jul 7 00:22:47.632462 kubelet[2329]: I0707 00:22:47.631874 2329 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:22:47.635383 kubelet[2329]: I0707 00:22:47.635331 2329 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:22:47.638893 kubelet[2329]: I0707 00:22:47.638865 2329 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:22:47.639425 kubelet[2329]: I0707 00:22:47.639390 2329 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:22:47.642721 kubelet[2329]: I0707 00:22:47.642669 2329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:22:47.644345 kubelet[2329]: I0707 00:22:47.644302 2329 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:22:47.645750 kubelet[2329]: I0707 00:22:47.645657 2329 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:22:47.645827 kubelet[2329]: E0707 00:22:47.645809 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-7-4873e20794\" not found" Jul 7 00:22:47.646268 kubelet[2329]: E0707 00:22:47.639637 2329 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.122.157:6443/api/v1/namespaces/default/events\": dial tcp 146.190.122.157:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-7-4873e20794.184fd043522458f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-7-4873e20794,UID:ci-4344.1.1-7-4873e20794,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-7-4873e20794,},FirstTimestamp:2025-07-07 00:22:47.629912313 +0000 UTC m=+0.618146759,LastTimestamp:2025-07-07 00:22:47.629912313 +0000 UTC m=+0.618146759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-7-4873e20794,}" Jul 7 00:22:47.646619 kubelet[2329]: I0707 00:22:47.646599 2329 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:22:47.646729 kubelet[2329]: I0707 00:22:47.646715 2329 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:22:47.649232 kubelet[2329]: E0707 00:22:47.649183 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.122.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-7-4873e20794?timeout=10s\": dial tcp 146.190.122.157:6443: connect: connection refused" interval="200ms" Jul 7 
00:22:47.650795 kubelet[2329]: I0707 00:22:47.649396 2329 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:22:47.650795 kubelet[2329]: I0707 00:22:47.649470 2329 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:22:47.651237 kubelet[2329]: W0707 00:22:47.651190 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.122.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.122.157:6443: connect: connection refused Jul 7 00:22:47.651342 kubelet[2329]: E0707 00:22:47.651325 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.122.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.122.157:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:22:47.657484 kubelet[2329]: I0707 00:22:47.657440 2329 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:22:47.681752 kubelet[2329]: I0707 00:22:47.681729 2329 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:22:47.681979 kubelet[2329]: I0707 00:22:47.681963 2329 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:22:47.682021 kubelet[2329]: I0707 00:22:47.681886 2329 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 7 00:22:47.682308 kubelet[2329]: I0707 00:22:47.682088 2329 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:22:47.685268 kubelet[2329]: I0707 00:22:47.685223 2329 policy_none.go:49] "None policy: Start" Jul 7 00:22:47.686441 kubelet[2329]: I0707 00:22:47.686409 2329 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:22:47.686533 kubelet[2329]: I0707 00:22:47.686449 2329 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:22:47.687861 kubelet[2329]: I0707 00:22:47.687808 2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:22:47.687861 kubelet[2329]: I0707 00:22:47.687838 2329 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:22:47.688004 kubelet[2329]: I0707 00:22:47.687996 2329 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:22:47.688215 kubelet[2329]: E0707 00:22:47.688118 2329 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:22:47.690640 kubelet[2329]: W0707 00:22:47.690542 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.122.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.122.157:6443: connect: connection refused Jul 7 00:22:47.690996 kubelet[2329]: E0707 00:22:47.690792 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.122.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.122.157:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:22:47.700134 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 7 00:22:47.719809 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:22:47.724449 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:22:47.735660 kubelet[2329]: I0707 00:22:47.735093 2329 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:22:47.735660 kubelet[2329]: I0707 00:22:47.735305 2329 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:22:47.735660 kubelet[2329]: I0707 00:22:47.735317 2329 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:22:47.737761 kubelet[2329]: I0707 00:22:47.737604 2329 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:22:47.740634 kubelet[2329]: E0707 00:22:47.740604 2329 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-7-4873e20794\" not found" Jul 7 00:22:47.799830 systemd[1]: Created slice kubepods-burstable-pod52976b737b267838a4bf7e71144c29d7.slice - libcontainer container kubepods-burstable-pod52976b737b267838a4bf7e71144c29d7.slice. Jul 7 00:22:47.820530 systemd[1]: Created slice kubepods-burstable-podbaea36e8d80d48ff8497d3b01ba18872.slice - libcontainer container kubepods-burstable-podbaea36e8d80d48ff8497d3b01ba18872.slice. Jul 7 00:22:47.838305 systemd[1]: Created slice kubepods-burstable-podd7822abcf339ef943330501d502eb662.slice - libcontainer container kubepods-burstable-podd7822abcf339ef943330501d502eb662.slice. 
Jul 7 00:22:47.839010 kubelet[2329]: I0707 00:22:47.838973 2329 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.840972 kubelet[2329]: E0707 00:22:47.840933 2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.122.157:6443/api/v1/nodes\": dial tcp 146.190.122.157:6443: connect: connection refused" node="ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.847825 kubelet[2329]: I0707 00:22:47.847773 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.847985 kubelet[2329]: I0707 00:22:47.847812 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.847985 kubelet[2329]: I0707 00:22:47.847877 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.847985 kubelet[2329]: I0707 00:22:47.847927 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.847985 kubelet[2329]: I0707 00:22:47.847943 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52976b737b267838a4bf7e71144c29d7-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-7-4873e20794\" (UID: \"52976b737b267838a4bf7e71144c29d7\") " pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.848153 kubelet[2329]: I0707 00:22:47.848014 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52976b737b267838a4bf7e71144c29d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-7-4873e20794\" (UID: \"52976b737b267838a4bf7e71144c29d7\") " pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.848153 kubelet[2329]: I0707 00:22:47.848041 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.848153 kubelet[2329]: I0707 00:22:47.848085 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7822abcf339ef943330501d502eb662-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-7-4873e20794\" (UID: \"d7822abcf339ef943330501d502eb662\") " pod="kube-system/kube-scheduler-ci-4344.1.1-7-4873e20794" Jul 7 00:22:47.848153 kubelet[2329]: 
I0707 00:22:47.848101 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52976b737b267838a4bf7e71144c29d7-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-7-4873e20794\" (UID: \"52976b737b267838a4bf7e71144c29d7\") " pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:47.850000 kubelet[2329]: E0707 00:22:47.849935 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.122.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-7-4873e20794?timeout=10s\": dial tcp 146.190.122.157:6443: connect: connection refused" interval="400ms"
Jul 7 00:22:48.043517 kubelet[2329]: I0707 00:22:48.043481 2329 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:48.043929 kubelet[2329]: E0707 00:22:48.043890 2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.122.157:6443/api/v1/nodes\": dial tcp 146.190.122.157:6443: connect: connection refused" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:48.119179 kubelet[2329]: E0707 00:22:48.119072 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:48.120306 containerd[1539]: time="2025-07-07T00:22:48.120262640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-7-4873e20794,Uid:52976b737b267838a4bf7e71144c29d7,Namespace:kube-system,Attempt:0,}"
Jul 7 00:22:48.135079 kubelet[2329]: E0707 00:22:48.135022 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:48.136090 containerd[1539]: time="2025-07-07T00:22:48.135880229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-7-4873e20794,Uid:baea36e8d80d48ff8497d3b01ba18872,Namespace:kube-system,Attempt:0,}"
Jul 7 00:22:48.141710 kubelet[2329]: E0707 00:22:48.141641 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:48.156797 containerd[1539]: time="2025-07-07T00:22:48.156742746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-7-4873e20794,Uid:d7822abcf339ef943330501d502eb662,Namespace:kube-system,Attempt:0,}"
Jul 7 00:22:48.251493 kubelet[2329]: E0707 00:22:48.251353 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.122.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-7-4873e20794?timeout=10s\": dial tcp 146.190.122.157:6443: connect: connection refused" interval="800ms"
Jul 7 00:22:48.269667 containerd[1539]: time="2025-07-07T00:22:48.269553525Z" level=info msg="connecting to shim c9eefb9ae4712e07595dcdb1b050a8b10a0dad9a9da69cec66975940c6e40613" address="unix:///run/containerd/s/ca6beeb475acca274be25d3bd74cf26149785ad19c8da6a650e182b96f007ebb" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:22:48.269881 containerd[1539]: time="2025-07-07T00:22:48.269851711Z" level=info msg="connecting to shim 8b2ccce417b6703941a3c1efac103906788f08437d6df317939292d6bf0a86d0" address="unix:///run/containerd/s/7b0e5ee80b4e5f34d8e476a0c619f30037f6996ab3e8c69f4083df0a550868ec" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:22:48.285191 containerd[1539]: time="2025-07-07T00:22:48.284966570Z" level=info msg="connecting to shim 68f31bc13d7f993e29fc3e49e9ca6318b80acab0c0932f0e944fc25d96eb0cbd" address="unix:///run/containerd/s/7b83a0313ed9529567c8851e6b133fa25a596d2a49fa22b55d78835aa5ca1792" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:22:48.400115 systemd[1]: Started cri-containerd-68f31bc13d7f993e29fc3e49e9ca6318b80acab0c0932f0e944fc25d96eb0cbd.scope - libcontainer container 68f31bc13d7f993e29fc3e49e9ca6318b80acab0c0932f0e944fc25d96eb0cbd.
Jul 7 00:22:48.408770 systemd[1]: Started cri-containerd-8b2ccce417b6703941a3c1efac103906788f08437d6df317939292d6bf0a86d0.scope - libcontainer container 8b2ccce417b6703941a3c1efac103906788f08437d6df317939292d6bf0a86d0.
Jul 7 00:22:48.411359 systemd[1]: Started cri-containerd-c9eefb9ae4712e07595dcdb1b050a8b10a0dad9a9da69cec66975940c6e40613.scope - libcontainer container c9eefb9ae4712e07595dcdb1b050a8b10a0dad9a9da69cec66975940c6e40613.
Jul 7 00:22:48.448900 kubelet[2329]: I0707 00:22:48.448800 2329 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:48.449387 kubelet[2329]: E0707 00:22:48.449320 2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.122.157:6443/api/v1/nodes\": dial tcp 146.190.122.157:6443: connect: connection refused" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:48.523362 containerd[1539]: time="2025-07-07T00:22:48.523305477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-7-4873e20794,Uid:baea36e8d80d48ff8497d3b01ba18872,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b2ccce417b6703941a3c1efac103906788f08437d6df317939292d6bf0a86d0\""
Jul 7 00:22:48.525720 kubelet[2329]: W0707 00:22:48.524068 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.122.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.122.157:6443: connect: connection refused
Jul 7 00:22:48.525720 kubelet[2329]: E0707 00:22:48.524168 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.122.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.122.157:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:22:48.538356 kubelet[2329]: E0707 00:22:48.538290 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:48.543312 kubelet[2329]: W0707 00:22:48.542942 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.122.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.122.157:6443: connect: connection refused
Jul 7 00:22:48.543312 kubelet[2329]: E0707 00:22:48.543041 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.122.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.122.157:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:22:48.551846 containerd[1539]: time="2025-07-07T00:22:48.551657527Z" level=info msg="CreateContainer within sandbox \"8b2ccce417b6703941a3c1efac103906788f08437d6df317939292d6bf0a86d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 7 00:22:48.575406 containerd[1539]: time="2025-07-07T00:22:48.575341399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-7-4873e20794,Uid:52976b737b267838a4bf7e71144c29d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"68f31bc13d7f993e29fc3e49e9ca6318b80acab0c0932f0e944fc25d96eb0cbd\""
Jul 7 00:22:48.577272 kubelet[2329]: E0707 00:22:48.577231 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:48.580981 containerd[1539]: time="2025-07-07T00:22:48.580821954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-7-4873e20794,Uid:d7822abcf339ef943330501d502eb662,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9eefb9ae4712e07595dcdb1b050a8b10a0dad9a9da69cec66975940c6e40613\""
Jul 7 00:22:48.582007 kubelet[2329]: E0707 00:22:48.581913 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:48.583869 containerd[1539]: time="2025-07-07T00:22:48.583764741Z" level=info msg="CreateContainer within sandbox \"68f31bc13d7f993e29fc3e49e9ca6318b80acab0c0932f0e944fc25d96eb0cbd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 00:22:48.585706 containerd[1539]: time="2025-07-07T00:22:48.585011097Z" level=info msg="CreateContainer within sandbox \"c9eefb9ae4712e07595dcdb1b050a8b10a0dad9a9da69cec66975940c6e40613\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 7 00:22:48.587150 containerd[1539]: time="2025-07-07T00:22:48.587083483Z" level=info msg="Container a25684989b6444c4fe93d5750fc3357de4e66d1922edf9421c95a1d324e01a11: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:22:48.600041 containerd[1539]: time="2025-07-07T00:22:48.599990984Z" level=info msg="CreateContainer within sandbox \"8b2ccce417b6703941a3c1efac103906788f08437d6df317939292d6bf0a86d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a25684989b6444c4fe93d5750fc3357de4e66d1922edf9421c95a1d324e01a11\""
Jul 7 00:22:48.602447 containerd[1539]: time="2025-07-07T00:22:48.602401454Z" level=info msg="StartContainer for \"a25684989b6444c4fe93d5750fc3357de4e66d1922edf9421c95a1d324e01a11\""
Jul 7 00:22:48.604421 containerd[1539]: time="2025-07-07T00:22:48.603545364Z" level=info msg="Container 21dad1a17276451102ffd4e3b6f941c1f1c35179842562bbab9aec13937dc894: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:22:48.605062 containerd[1539]: time="2025-07-07T00:22:48.602401963Z" level=info msg="Container c1f637da88ba134042da869607aee9689e69aa0aab58d6706e695c9f26254ec5: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:22:48.608925 containerd[1539]: time="2025-07-07T00:22:48.608857393Z" level=info msg="connecting to shim a25684989b6444c4fe93d5750fc3357de4e66d1922edf9421c95a1d324e01a11" address="unix:///run/containerd/s/7b0e5ee80b4e5f34d8e476a0c619f30037f6996ab3e8c69f4083df0a550868ec" protocol=ttrpc version=3
Jul 7 00:22:48.611158 containerd[1539]: time="2025-07-07T00:22:48.611100149Z" level=info msg="CreateContainer within sandbox \"c9eefb9ae4712e07595dcdb1b050a8b10a0dad9a9da69cec66975940c6e40613\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"21dad1a17276451102ffd4e3b6f941c1f1c35179842562bbab9aec13937dc894\""
Jul 7 00:22:48.612745 containerd[1539]: time="2025-07-07T00:22:48.612075406Z" level=info msg="StartContainer for \"21dad1a17276451102ffd4e3b6f941c1f1c35179842562bbab9aec13937dc894\""
Jul 7 00:22:48.613950 containerd[1539]: time="2025-07-07T00:22:48.613899280Z" level=info msg="connecting to shim 21dad1a17276451102ffd4e3b6f941c1f1c35179842562bbab9aec13937dc894" address="unix:///run/containerd/s/ca6beeb475acca274be25d3bd74cf26149785ad19c8da6a650e182b96f007ebb" protocol=ttrpc version=3
Jul 7 00:22:48.635476 containerd[1539]: time="2025-07-07T00:22:48.634931968Z" level=info msg="CreateContainer within sandbox \"68f31bc13d7f993e29fc3e49e9ca6318b80acab0c0932f0e944fc25d96eb0cbd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c1f637da88ba134042da869607aee9689e69aa0aab58d6706e695c9f26254ec5\""
Jul 7 00:22:48.636714 containerd[1539]: time="2025-07-07T00:22:48.636532789Z" level=info msg="StartContainer for \"c1f637da88ba134042da869607aee9689e69aa0aab58d6706e695c9f26254ec5\""
Jul 7 00:22:48.639531
containerd[1539]: time="2025-07-07T00:22:48.639452142Z" level=info msg="connecting to shim c1f637da88ba134042da869607aee9689e69aa0aab58d6706e695c9f26254ec5" address="unix:///run/containerd/s/7b83a0313ed9529567c8851e6b133fa25a596d2a49fa22b55d78835aa5ca1792" protocol=ttrpc version=3
Jul 7 00:22:48.662125 systemd[1]: Started cri-containerd-21dad1a17276451102ffd4e3b6f941c1f1c35179842562bbab9aec13937dc894.scope - libcontainer container 21dad1a17276451102ffd4e3b6f941c1f1c35179842562bbab9aec13937dc894.
Jul 7 00:22:48.664737 systemd[1]: Started cri-containerd-a25684989b6444c4fe93d5750fc3357de4e66d1922edf9421c95a1d324e01a11.scope - libcontainer container a25684989b6444c4fe93d5750fc3357de4e66d1922edf9421c95a1d324e01a11.
Jul 7 00:22:48.710067 systemd[1]: Started cri-containerd-c1f637da88ba134042da869607aee9689e69aa0aab58d6706e695c9f26254ec5.scope - libcontainer container c1f637da88ba134042da869607aee9689e69aa0aab58d6706e695c9f26254ec5.
Jul 7 00:22:48.825173 kubelet[2329]: W0707 00:22:48.825089 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.122.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.122.157:6443: connect: connection refused
Jul 7 00:22:48.825862 kubelet[2329]: E0707 00:22:48.825741 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.122.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.122.157:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:22:48.846573 containerd[1539]: time="2025-07-07T00:22:48.846499449Z" level=info msg="StartContainer for \"a25684989b6444c4fe93d5750fc3357de4e66d1922edf9421c95a1d324e01a11\" returns successfully"
Jul 7 00:22:48.860963 containerd[1539]: time="2025-07-07T00:22:48.860898123Z" level=info msg="StartContainer for \"21dad1a17276451102ffd4e3b6f941c1f1c35179842562bbab9aec13937dc894\" returns successfully"
Jul 7 00:22:48.875479 containerd[1539]: time="2025-07-07T00:22:48.875369376Z" level=info msg="StartContainer for \"c1f637da88ba134042da869607aee9689e69aa0aab58d6706e695c9f26254ec5\" returns successfully"
Jul 7 00:22:49.253174 kubelet[2329]: I0707 00:22:49.253091 2329 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:49.734601 kubelet[2329]: E0707 00:22:49.734356 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:49.745043 kubelet[2329]: E0707 00:22:49.744934 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:49.748735 kubelet[2329]: E0707 00:22:49.747907 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:50.751246 kubelet[2329]: E0707 00:22:50.751161 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:50.753109 kubelet[2329]: E0707 00:22:50.752188 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:50.753309 kubelet[2329]: E0707 00:22:50.753229 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:51.171724 kubelet[2329]: E0707 00:22:51.171568 2329 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-7-4873e20794\" not found" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:51.225387 kubelet[2329]: I0707 00:22:51.225338 2329 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:51.225595 kubelet[2329]: E0707 00:22:51.225420 2329 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4344.1.1-7-4873e20794\": node \"ci-4344.1.1-7-4873e20794\" not found"
Jul 7 00:22:51.627497 kubelet[2329]: I0707 00:22:51.627383 2329 apiserver.go:52] "Watching apiserver"
Jul 7 00:22:51.647599 kubelet[2329]: I0707 00:22:51.647540 2329 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 7 00:22:51.759822 kubelet[2329]: E0707 00:22:51.759764 2329 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-7-4873e20794\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:51.760347 kubelet[2329]: E0707 00:22:51.760044 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:52.762991 kubelet[2329]: W0707 00:22:52.762877 2329 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:22:52.764059 kubelet[2329]: E0707 00:22:52.763689 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:53.465511 systemd[1]: Reload requested from client PID 2603 ('systemctl') (unit session-7.scope)...
Jul 7 00:22:53.465532 systemd[1]: Reloading...
Jul 7 00:22:53.577719 zram_generator::config[2646]: No configuration found.
Jul 7 00:22:53.727244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:22:53.757788 kubelet[2329]: E0707 00:22:53.757737 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:53.952058 systemd[1]: Reloading finished in 486 ms.
Jul 7 00:22:53.990542 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:22:53.991020 kubelet[2329]: I0707 00:22:53.990978 2329 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 00:22:54.010340 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 00:22:54.010795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:22:54.010900 systemd[1]: kubelet.service: Consumed 1.113s CPU time, 124.1M memory peak.
Jul 7 00:22:54.014187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:22:54.194702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:22:54.203489 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 00:22:54.262134 kubelet[2697]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:22:54.262504 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 00:22:54.262576 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:22:54.262807 kubelet[2697]: I0707 00:22:54.262759 2697 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 00:22:54.271615 kubelet[2697]: I0707 00:22:54.271557 2697 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 00:22:54.271615 kubelet[2697]: I0707 00:22:54.271589 2697 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 00:22:54.271914 kubelet[2697]: I0707 00:22:54.271892 2697 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 00:22:54.277727 kubelet[2697]: I0707 00:22:54.277178 2697 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 00:22:54.288347 kubelet[2697]: I0707 00:22:54.288296 2697 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 00:22:54.297767 kubelet[2697]: I0707 00:22:54.297715 2697 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 00:22:54.303593 kubelet[2697]: I0707 00:22:54.302335 2697 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 00:22:54.303593 kubelet[2697]: I0707 00:22:54.302496 2697 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 00:22:54.303593 kubelet[2697]: I0707 00:22:54.302729 2697 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 00:22:54.303593 kubelet[2697]: I0707 00:22:54.302766 2697 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-7-4873e20794","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 00:22:54.304010 kubelet[2697]: I0707 00:22:54.303000 2697 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 00:22:54.304010 kubelet[2697]: I0707 00:22:54.303013 2697 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 00:22:54.304010 kubelet[2697]: I0707 00:22:54.303045 2697 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:22:54.304010 kubelet[2697]: I0707 00:22:54.303168 2697 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 00:22:54.304010 kubelet[2697]: I0707 00:22:54.303191 2697 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 00:22:54.304010 kubelet[2697]: I0707 00:22:54.303221 2697 kubelet.go:314] "Adding apiserver pod source"
Jul 7 00:22:54.304010 kubelet[2697]: I0707 00:22:54.303237 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 00:22:54.306391 kubelet[2697]: I0707 00:22:54.306372 2697 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 00:22:54.307600 kubelet[2697]: I0707 00:22:54.307072 2697 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 00:22:54.308611 kubelet[2697]: I0707 00:22:54.308325 2697 server.go:1274] "Started kubelet"
Jul 7 00:22:54.313629 kubelet[2697]: I0707 00:22:54.313582 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 00:22:54.327398 kubelet[2697]: I0707 00:22:54.327283 2697 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 00:22:54.330599 kubelet[2697]: I0707 00:22:54.330326 2697 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 00:22:54.331952 kubelet[2697]: I0707 00:22:54.331917 2697 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 00:22:54.332868
kubelet[2697]: I0707 00:22:54.332846 2697 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 00:22:54.339069 kubelet[2697]: I0707 00:22:54.339038 2697 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 00:22:54.339414 kubelet[2697]: I0707 00:22:54.339380 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 00:22:54.340909 kubelet[2697]: I0707 00:22:54.340876 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 00:22:54.341020 kubelet[2697]: I0707 00:22:54.340948 2697 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 7 00:22:54.341020 kubelet[2697]: I0707 00:22:54.340979 2697 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 7 00:22:54.341073 kubelet[2697]: E0707 00:22:54.341027 2697 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 00:22:54.344002 kubelet[2697]: I0707 00:22:54.343968 2697 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 00:22:54.345694 kubelet[2697]: I0707 00:22:54.345657 2697 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 00:22:54.346121 kubelet[2697]: I0707 00:22:54.346107 2697 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 00:22:54.355224 kubelet[2697]: I0707 00:22:54.355198 2697 factory.go:221] Registration of the containerd container factory successfully
Jul 7 00:22:54.355401 kubelet[2697]: I0707 00:22:54.355385 2697 factory.go:221] Registration of the systemd container factory successfully
Jul 7 00:22:54.355507 kubelet[2697]: E0707 00:22:54.355486 2697 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 00:22:54.356113 kubelet[2697]: I0707 00:22:54.355620 2697 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 00:22:54.412375 kubelet[2697]: I0707 00:22:54.412344 2697 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 7 00:22:54.412661 kubelet[2697]: I0707 00:22:54.412618 2697 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 7 00:22:54.413186 kubelet[2697]: I0707 00:22:54.412787 2697 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:22:54.413186 kubelet[2697]: I0707 00:22:54.413015 2697 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 00:22:54.413186 kubelet[2697]: I0707 00:22:54.413032 2697 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 00:22:54.413186 kubelet[2697]: I0707 00:22:54.413059 2697 policy_none.go:49] "None policy: Start"
Jul 7 00:22:54.414219 kubelet[2697]: I0707 00:22:54.414197 2697 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 7 00:22:54.414360 kubelet[2697]: I0707 00:22:54.414348 2697 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 00:22:54.414702 kubelet[2697]: I0707 00:22:54.414656 2697 state_mem.go:75] "Updated machine memory state"
Jul 7 00:22:54.420174 kubelet[2697]: I0707 00:22:54.420142 2697 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 00:22:54.420368 kubelet[2697]: I0707 00:22:54.420352 2697 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 00:22:54.420420 kubelet[2697]: I0707 00:22:54.420367 2697 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 00:22:54.421295 kubelet[2697]: I0707 00:22:54.421066 2697 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 00:22:54.457949 kubelet[2697]: W0707 00:22:54.457765 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:22:54.457949 kubelet[2697]: E0707 00:22:54.457853 2697 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-7-4873e20794\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.464086 kubelet[2697]: W0707 00:22:54.464045 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:22:54.464927 kubelet[2697]: W0707 00:22:54.464903 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:22:54.533348 kubelet[2697]: I0707 00:22:54.532472 2697 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.546985 kubelet[2697]: I0707 00:22:54.546821 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.547708 kubelet[2697]: I0707 00:22:54.547108 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52976b737b267838a4bf7e71144c29d7-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-7-4873e20794\" (UID: \"52976b737b267838a4bf7e71144c29d7\") " pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.547708 kubelet[2697]: I0707 00:22:54.547267 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52976b737b267838a4bf7e71144c29d7-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-7-4873e20794\" (UID: \"52976b737b267838a4bf7e71144c29d7\") " pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.547708 kubelet[2697]: I0707 00:22:54.547423 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.547708 kubelet[2697]: I0707 00:22:54.547621 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.547968 kubelet[2697]: I0707 00:22:54.547809 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.548024 kubelet[2697]: I0707 00:22:54.547978 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7822abcf339ef943330501d502eb662-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-7-4873e20794\" (UID: \"d7822abcf339ef943330501d502eb662\") " pod="kube-system/kube-scheduler-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.548477 kubelet[2697]: I0707 00:22:54.548010 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52976b737b267838a4bf7e71144c29d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-7-4873e20794\" (UID: \"52976b737b267838a4bf7e71144c29d7\") " pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.548477 kubelet[2697]: I0707 00:22:54.548164 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/baea36e8d80d48ff8497d3b01ba18872-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-7-4873e20794\" (UID: \"baea36e8d80d48ff8497d3b01ba18872\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.550936 kubelet[2697]: I0707 00:22:54.550893 2697 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.551065 kubelet[2697]: I0707 00:22:54.551004 2697 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-7-4873e20794"
Jul 7 00:22:54.759462 kubelet[2697]: E0707 00:22:54.759369 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:54.767394 kubelet[2697]: E0707 00:22:54.765664 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:54.767394 kubelet[2697]: E0707 00:22:54.765855 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:55.312484 kubelet[2697]: I0707 00:22:55.312356 2697 apiserver.go:52] "Watching apiserver"
Jul 7 00:22:55.346301 kubelet[2697]: I0707 00:22:55.346230 2697 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 7 00:22:55.383092 kubelet[2697]: E0707 00:22:55.383053 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:55.383622 kubelet[2697]: E0707 00:22:55.383581 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:55.395510 kubelet[2697]: W0707 00:22:55.395468 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:22:55.395667 kubelet[2697]: E0707 00:22:55.395553 2697 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-7-4873e20794\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794"
Jul 7 00:22:55.395771 kubelet[2697]: E0707 00:22:55.395755 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:55.425060 kubelet[2697]: I0707 00:22:55.424809 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-7-4873e20794" podStartSLOduration=3.424770309 podStartE2EDuration="3.424770309s" podCreationTimestamp="2025-07-07 00:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:55.423641065 +0000 UTC m=+1.213555521" watchObservedRunningTime="2025-07-07 00:22:55.424770309 +0000 UTC m=+1.214684760"
Jul 7 00:22:55.455099 kubelet[2697]: I0707 00:22:55.455031 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-7-4873e20794" podStartSLOduration=1.454995275 podStartE2EDuration="1.454995275s" podCreationTimestamp="2025-07-07 00:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:55.443285191 +0000 UTC m=+1.233199641" watchObservedRunningTime="2025-07-07 00:22:55.454995275 +0000 UTC m=+1.244909722"
Jul 7 00:22:55.469581 kubelet[2697]: I0707 00:22:55.469493 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-7-4873e20794" podStartSLOduration=1.469475142 podStartE2EDuration="1.469475142s" podCreationTimestamp="2025-07-07 00:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:55.456990559 +0000 UTC m=+1.246905003" watchObservedRunningTime="2025-07-07 00:22:55.469475142 +0000 UTC m=+1.259389588"
Jul 7 00:22:56.384283 kubelet[2697]: E0707 00:22:56.384233 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:57.233721 kubelet[2697]: E0707 00:22:57.233581 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:22:57.877914 kubelet[2697]: I0707 00:22:57.877867 2697 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 00:22:57.879769 containerd[1539]: time="2025-07-07T00:22:57.879694071Z" level=info msg="No cni
config template is specified, wait for other system components to drop the config." Jul 7 00:22:57.880626 kubelet[2697]: I0707 00:22:57.880582 2697 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:22:58.772587 kubelet[2697]: E0707 00:22:58.772549 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:22:58.824995 systemd[1]: Created slice kubepods-besteffort-podb2609dd8_590d_405d_8ff3_882614746510.slice - libcontainer container kubepods-besteffort-podb2609dd8_590d_405d_8ff3_882614746510.slice. Jul 7 00:22:58.878289 kubelet[2697]: I0707 00:22:58.878229 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2609dd8-590d-405d-8ff3-882614746510-xtables-lock\") pod \"kube-proxy-mtv2p\" (UID: \"b2609dd8-590d-405d-8ff3-882614746510\") " pod="kube-system/kube-proxy-mtv2p" Jul 7 00:22:58.878289 kubelet[2697]: I0707 00:22:58.878283 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2609dd8-590d-405d-8ff3-882614746510-kube-proxy\") pod \"kube-proxy-mtv2p\" (UID: \"b2609dd8-590d-405d-8ff3-882614746510\") " pod="kube-system/kube-proxy-mtv2p" Jul 7 00:22:58.879106 kubelet[2697]: I0707 00:22:58.878321 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2609dd8-590d-405d-8ff3-882614746510-lib-modules\") pod \"kube-proxy-mtv2p\" (UID: \"b2609dd8-590d-405d-8ff3-882614746510\") " pod="kube-system/kube-proxy-mtv2p" Jul 7 00:22:58.879106 kubelet[2697]: I0707 00:22:58.878351 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qbb6p\" (UniqueName: \"kubernetes.io/projected/b2609dd8-590d-405d-8ff3-882614746510-kube-api-access-qbb6p\") pod \"kube-proxy-mtv2p\" (UID: \"b2609dd8-590d-405d-8ff3-882614746510\") " pod="kube-system/kube-proxy-mtv2p" Jul 7 00:22:59.003811 kubelet[2697]: W0707 00:22:59.003560 2697 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4344.1.1-7-4873e20794" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4344.1.1-7-4873e20794' and this object Jul 7 00:22:59.003811 kubelet[2697]: E0707 00:22:59.003637 2697 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4344.1.1-7-4873e20794\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4344.1.1-7-4873e20794' and this object" logger="UnhandledError" Jul 7 00:22:59.005264 kubelet[2697]: W0707 00:22:59.005227 2697 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4344.1.1-7-4873e20794" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4344.1.1-7-4873e20794' and this object Jul 7 00:22:59.005497 kubelet[2697]: E0707 00:22:59.005465 2697 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4344.1.1-7-4873e20794\" cannot list resource \"configmaps\" in API group \"\" in the namespace 
\"tigera-operator\": no relationship found between node 'ci-4344.1.1-7-4873e20794' and this object" logger="UnhandledError" Jul 7 00:22:59.024627 systemd[1]: Created slice kubepods-besteffort-pod42d4f6ba_96d2_442a_bafa_fb93b07d26f0.slice - libcontainer container kubepods-besteffort-pod42d4f6ba_96d2_442a_bafa_fb93b07d26f0.slice. Jul 7 00:22:59.079387 kubelet[2697]: I0707 00:22:59.079292 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncjb5\" (UniqueName: \"kubernetes.io/projected/42d4f6ba-96d2-442a-bafa-fb93b07d26f0-kube-api-access-ncjb5\") pod \"tigera-operator-5bf8dfcb4-sz4rd\" (UID: \"42d4f6ba-96d2-442a-bafa-fb93b07d26f0\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-sz4rd" Jul 7 00:22:59.079901 kubelet[2697]: I0707 00:22:59.079861 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42d4f6ba-96d2-442a-bafa-fb93b07d26f0-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-sz4rd\" (UID: \"42d4f6ba-96d2-442a-bafa-fb93b07d26f0\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-sz4rd" Jul 7 00:22:59.137628 kubelet[2697]: E0707 00:22:59.137533 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:22:59.138366 containerd[1539]: time="2025-07-07T00:22:59.138327982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mtv2p,Uid:b2609dd8-590d-405d-8ff3-882614746510,Namespace:kube-system,Attempt:0,}" Jul 7 00:22:59.167415 containerd[1539]: time="2025-07-07T00:22:59.167314467Z" level=info msg="connecting to shim 14a661d88486f8ac682370b05699318eafa74d21db42fe8f9be8a06f711695af" address="unix:///run/containerd/s/0830f1a75f962b7be38f6ac1fa348fa02d1fe4c5e756c65f87363a8a3f36cd62" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:22:59.214017 
systemd[1]: Started cri-containerd-14a661d88486f8ac682370b05699318eafa74d21db42fe8f9be8a06f711695af.scope - libcontainer container 14a661d88486f8ac682370b05699318eafa74d21db42fe8f9be8a06f711695af. Jul 7 00:22:59.254024 containerd[1539]: time="2025-07-07T00:22:59.253939918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mtv2p,Uid:b2609dd8-590d-405d-8ff3-882614746510,Namespace:kube-system,Attempt:0,} returns sandbox id \"14a661d88486f8ac682370b05699318eafa74d21db42fe8f9be8a06f711695af\"" Jul 7 00:22:59.255836 kubelet[2697]: E0707 00:22:59.255669 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:22:59.264356 containerd[1539]: time="2025-07-07T00:22:59.264286336Z" level=info msg="CreateContainer within sandbox \"14a661d88486f8ac682370b05699318eafa74d21db42fe8f9be8a06f711695af\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:22:59.286183 containerd[1539]: time="2025-07-07T00:22:59.286008405Z" level=info msg="Container ad39dad1c6cf96c91b813b47f3ce9b2237a6cf12291217fe244433b50946e425: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:59.291305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555590779.mount: Deactivated successfully. 
Jul 7 00:22:59.299843 containerd[1539]: time="2025-07-07T00:22:59.299471801Z" level=info msg="CreateContainer within sandbox \"14a661d88486f8ac682370b05699318eafa74d21db42fe8f9be8a06f711695af\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad39dad1c6cf96c91b813b47f3ce9b2237a6cf12291217fe244433b50946e425\"" Jul 7 00:22:59.301517 containerd[1539]: time="2025-07-07T00:22:59.301466583Z" level=info msg="StartContainer for \"ad39dad1c6cf96c91b813b47f3ce9b2237a6cf12291217fe244433b50946e425\"" Jul 7 00:22:59.303712 containerd[1539]: time="2025-07-07T00:22:59.303625417Z" level=info msg="connecting to shim ad39dad1c6cf96c91b813b47f3ce9b2237a6cf12291217fe244433b50946e425" address="unix:///run/containerd/s/0830f1a75f962b7be38f6ac1fa348fa02d1fe4c5e756c65f87363a8a3f36cd62" protocol=ttrpc version=3 Jul 7 00:22:59.332252 systemd[1]: Started cri-containerd-ad39dad1c6cf96c91b813b47f3ce9b2237a6cf12291217fe244433b50946e425.scope - libcontainer container ad39dad1c6cf96c91b813b47f3ce9b2237a6cf12291217fe244433b50946e425. 
Jul 7 00:22:59.383336 containerd[1539]: time="2025-07-07T00:22:59.383292078Z" level=info msg="StartContainer for \"ad39dad1c6cf96c91b813b47f3ce9b2237a6cf12291217fe244433b50946e425\" returns successfully" Jul 7 00:22:59.394689 kubelet[2697]: E0707 00:22:59.394631 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:22:59.395641 kubelet[2697]: E0707 00:22:59.395600 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:00.192637 kubelet[2697]: E0707 00:23:00.192431 2697 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 7 00:23:00.192637 kubelet[2697]: E0707 00:23:00.192504 2697 projected.go:194] Error preparing data for projected volume kube-api-access-ncjb5 for pod tigera-operator/tigera-operator-5bf8dfcb4-sz4rd: failed to sync configmap cache: timed out waiting for the condition Jul 7 00:23:00.192637 kubelet[2697]: E0707 00:23:00.192618 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/42d4f6ba-96d2-442a-bafa-fb93b07d26f0-kube-api-access-ncjb5 podName:42d4f6ba-96d2-442a-bafa-fb93b07d26f0 nodeName:}" failed. No retries permitted until 2025-07-07 00:23:00.692584079 +0000 UTC m=+6.482498530 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncjb5" (UniqueName: "kubernetes.io/projected/42d4f6ba-96d2-442a-bafa-fb93b07d26f0-kube-api-access-ncjb5") pod "tigera-operator-5bf8dfcb4-sz4rd" (UID: "42d4f6ba-96d2-442a-bafa-fb93b07d26f0") : failed to sync configmap cache: timed out waiting for the condition Jul 7 00:23:00.265980 kubelet[2697]: E0707 00:23:00.264833 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:00.292612 kubelet[2697]: I0707 00:23:00.292341 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mtv2p" podStartSLOduration=2.292314365 podStartE2EDuration="2.292314365s" podCreationTimestamp="2025-07-07 00:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:59.448095262 +0000 UTC m=+5.238009712" watchObservedRunningTime="2025-07-07 00:23:00.292314365 +0000 UTC m=+6.082228819" Jul 7 00:23:00.396445 kubelet[2697]: E0707 00:23:00.396389 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:00.398587 kubelet[2697]: E0707 00:23:00.398538 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:00.831019 containerd[1539]: time="2025-07-07T00:23:00.830889783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-sz4rd,Uid:42d4f6ba-96d2-442a-bafa-fb93b07d26f0,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:23:00.859803 containerd[1539]: time="2025-07-07T00:23:00.859640263Z" level=info msg="connecting to shim 
d94ec74201ffcff1fe76b04253cce735daf32fca2dd07f7b84acb2a960978416" address="unix:///run/containerd/s/0b93496546c50e8c24e67388b83a3d700b3daeb50e21ed49f7931ac93b68eaa7" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:00.904035 systemd[1]: Started cri-containerd-d94ec74201ffcff1fe76b04253cce735daf32fca2dd07f7b84acb2a960978416.scope - libcontainer container d94ec74201ffcff1fe76b04253cce735daf32fca2dd07f7b84acb2a960978416. Jul 7 00:23:00.971736 containerd[1539]: time="2025-07-07T00:23:00.971633495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-sz4rd,Uid:42d4f6ba-96d2-442a-bafa-fb93b07d26f0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d94ec74201ffcff1fe76b04253cce735daf32fca2dd07f7b84acb2a960978416\"" Jul 7 00:23:00.974773 containerd[1539]: time="2025-07-07T00:23:00.974640574Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:23:00.976998 systemd-resolved[1402]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jul 7 00:23:02.159311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663632993.mount: Deactivated successfully. 
Jul 7 00:23:02.961089 containerd[1539]: time="2025-07-07T00:23:02.961010695Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:02.962053 containerd[1539]: time="2025-07-07T00:23:02.961953660Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:23:02.963145 containerd[1539]: time="2025-07-07T00:23:02.962726924Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:02.964922 containerd[1539]: time="2025-07-07T00:23:02.964881996Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:02.965928 containerd[1539]: time="2025-07-07T00:23:02.965886033Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.990878196s" Jul 7 00:23:02.966110 containerd[1539]: time="2025-07-07T00:23:02.966079296Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:23:02.972614 containerd[1539]: time="2025-07-07T00:23:02.972431721Z" level=info msg="CreateContainer within sandbox \"d94ec74201ffcff1fe76b04253cce735daf32fca2dd07f7b84acb2a960978416\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:23:02.983331 containerd[1539]: time="2025-07-07T00:23:02.983121525Z" level=info msg="Container 
3e0ee3e648c17e8d9337c1a194c8bff361343a7f6efa40634f141444049f14bf: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:02.992024 containerd[1539]: time="2025-07-07T00:23:02.991974307Z" level=info msg="CreateContainer within sandbox \"d94ec74201ffcff1fe76b04253cce735daf32fca2dd07f7b84acb2a960978416\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3e0ee3e648c17e8d9337c1a194c8bff361343a7f6efa40634f141444049f14bf\"" Jul 7 00:23:02.992875 containerd[1539]: time="2025-07-07T00:23:02.992838600Z" level=info msg="StartContainer for \"3e0ee3e648c17e8d9337c1a194c8bff361343a7f6efa40634f141444049f14bf\"" Jul 7 00:23:02.993961 containerd[1539]: time="2025-07-07T00:23:02.993921999Z" level=info msg="connecting to shim 3e0ee3e648c17e8d9337c1a194c8bff361343a7f6efa40634f141444049f14bf" address="unix:///run/containerd/s/0b93496546c50e8c24e67388b83a3d700b3daeb50e21ed49f7931ac93b68eaa7" protocol=ttrpc version=3 Jul 7 00:23:03.027033 systemd[1]: Started cri-containerd-3e0ee3e648c17e8d9337c1a194c8bff361343a7f6efa40634f141444049f14bf.scope - libcontainer container 3e0ee3e648c17e8d9337c1a194c8bff361343a7f6efa40634f141444049f14bf. 
Jul 7 00:23:03.081820 containerd[1539]: time="2025-07-07T00:23:03.081735687Z" level=info msg="StartContainer for \"3e0ee3e648c17e8d9337c1a194c8bff361343a7f6efa40634f141444049f14bf\" returns successfully" Jul 7 00:23:03.421552 kubelet[2697]: I0707 00:23:03.421451 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-sz4rd" podStartSLOduration=3.427451685 podStartE2EDuration="5.421277116s" podCreationTimestamp="2025-07-07 00:22:58 +0000 UTC" firstStartedPulling="2025-07-07 00:23:00.973799363 +0000 UTC m=+6.763713793" lastFinishedPulling="2025-07-07 00:23:02.967624782 +0000 UTC m=+8.757539224" observedRunningTime="2025-07-07 00:23:03.420544389 +0000 UTC m=+9.210458844" watchObservedRunningTime="2025-07-07 00:23:03.421277116 +0000 UTC m=+9.211191569" Jul 7 00:23:07.240408 kubelet[2697]: E0707 00:23:07.240054 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:10.095965 sudo[1780]: pam_unix(sudo:session): session closed for user root Jul 7 00:23:10.098322 update_engine[1520]: I20250707 00:23:10.097736 1520 update_attempter.cc:509] Updating boot flags... Jul 7 00:23:10.099839 sshd[1779]: Connection closed by 139.178.68.195 port 36922 Jul 7 00:23:10.101216 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:10.110875 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:23:10.111315 systemd[1]: sshd@6-146.190.122.157:22-139.178.68.195:36922.service: Deactivated successfully. Jul 7 00:23:10.121277 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:23:10.122542 systemd[1]: session-7.scope: Consumed 5.645s CPU time, 157.9M memory peak. Jul 7 00:23:10.134031 systemd-logind[1519]: Removed session 7. 
Jul 7 00:23:15.304945 systemd[1]: Created slice kubepods-besteffort-podc057fc50_feec_4de1_8a9b_171daa297c37.slice - libcontainer container kubepods-besteffort-podc057fc50_feec_4de1_8a9b_171daa297c37.slice. Jul 7 00:23:15.402239 kubelet[2697]: I0707 00:23:15.402057 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd9mh\" (UniqueName: \"kubernetes.io/projected/c057fc50-feec-4de1-8a9b-171daa297c37-kube-api-access-wd9mh\") pod \"calico-typha-9ddc77688-nfhjc\" (UID: \"c057fc50-feec-4de1-8a9b-171daa297c37\") " pod="calico-system/calico-typha-9ddc77688-nfhjc" Jul 7 00:23:15.402239 kubelet[2697]: I0707 00:23:15.402101 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c057fc50-feec-4de1-8a9b-171daa297c37-tigera-ca-bundle\") pod \"calico-typha-9ddc77688-nfhjc\" (UID: \"c057fc50-feec-4de1-8a9b-171daa297c37\") " pod="calico-system/calico-typha-9ddc77688-nfhjc" Jul 7 00:23:15.402239 kubelet[2697]: I0707 00:23:15.402121 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c057fc50-feec-4de1-8a9b-171daa297c37-typha-certs\") pod \"calico-typha-9ddc77688-nfhjc\" (UID: \"c057fc50-feec-4de1-8a9b-171daa297c37\") " pod="calico-system/calico-typha-9ddc77688-nfhjc" Jul 7 00:23:15.610114 kubelet[2697]: E0707 00:23:15.609991 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:15.611268 containerd[1539]: time="2025-07-07T00:23:15.610933039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9ddc77688-nfhjc,Uid:c057fc50-feec-4de1-8a9b-171daa297c37,Namespace:calico-system,Attempt:0,}" Jul 7 00:23:15.644147 containerd[1539]: 
time="2025-07-07T00:23:15.644069455Z" level=info msg="connecting to shim 669d0d6022e581335cc78e76f64c0945d808a15aeddcbb6c8acd951d1c6777b5" address="unix:///run/containerd/s/a92f2dc25d93c0989dfc6f7f98e0929c113806803777ed035d8426cf3269e6b1" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:15.688055 systemd[1]: Started cri-containerd-669d0d6022e581335cc78e76f64c0945d808a15aeddcbb6c8acd951d1c6777b5.scope - libcontainer container 669d0d6022e581335cc78e76f64c0945d808a15aeddcbb6c8acd951d1c6777b5. Jul 7 00:23:15.723641 systemd[1]: Created slice kubepods-besteffort-pode92d7d14_b0a3_4aa3_b72e_06dfcd9266be.slice - libcontainer container kubepods-besteffort-pode92d7d14_b0a3_4aa3_b72e_06dfcd9266be.slice. Jul 7 00:23:15.809389 kubelet[2697]: I0707 00:23:15.808152 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-cni-log-dir\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.809389 kubelet[2697]: I0707 00:23:15.808218 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-policysync\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.809389 kubelet[2697]: I0707 00:23:15.808258 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4zp\" (UniqueName: \"kubernetes.io/projected/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-kube-api-access-zg4zp\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.809389 kubelet[2697]: I0707 00:23:15.808293 2697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-cni-bin-dir\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.809389 kubelet[2697]: I0707 00:23:15.808317 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-cni-net-dir\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.810983 kubelet[2697]: I0707 00:23:15.808343 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-flexvol-driver-host\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.810983 kubelet[2697]: I0707 00:23:15.808369 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-var-lib-calico\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.810983 kubelet[2697]: I0707 00:23:15.808398 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-tigera-ca-bundle\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.810983 kubelet[2697]: I0707 00:23:15.808423 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-var-run-calico\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.810983 kubelet[2697]: I0707 00:23:15.808519 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-lib-modules\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.811228 kubelet[2697]: I0707 00:23:15.808552 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-node-certs\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.811228 kubelet[2697]: I0707 00:23:15.808577 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e92d7d14-b0a3-4aa3-b72e-06dfcd9266be-xtables-lock\") pod \"calico-node-g8rdb\" (UID: \"e92d7d14-b0a3-4aa3-b72e-06dfcd9266be\") " pod="calico-system/calico-node-g8rdb" Jul 7 00:23:15.912228 kubelet[2697]: E0707 00:23:15.911274 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.913124 kubelet[2697]: W0707 00:23:15.912735 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.913124 kubelet[2697]: E0707 00:23:15.912795 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, 
skipping. Error: unexpected end of JSON input" Jul 7 00:23:15.913464 kubelet[2697]: E0707 00:23:15.913240 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.913464 kubelet[2697]: W0707 00:23:15.913258 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.913464 kubelet[2697]: E0707 00:23:15.913300 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:15.913635 kubelet[2697]: E0707 00:23:15.913482 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.913635 kubelet[2697]: W0707 00:23:15.913494 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.913635 kubelet[2697]: E0707 00:23:15.913525 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:15.914026 kubelet[2697]: E0707 00:23:15.914005 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.914026 kubelet[2697]: W0707 00:23:15.914022 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.914180 kubelet[2697]: E0707 00:23:15.914091 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:15.915006 kubelet[2697]: E0707 00:23:15.914984 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.915006 kubelet[2697]: W0707 00:23:15.915002 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.915165 kubelet[2697]: E0707 00:23:15.915040 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:15.916925 kubelet[2697]: E0707 00:23:15.916896 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.916925 kubelet[2697]: W0707 00:23:15.916923 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.917132 kubelet[2697]: E0707 00:23:15.916999 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:15.917174 kubelet[2697]: E0707 00:23:15.917160 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.917240 kubelet[2697]: W0707 00:23:15.917173 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.917240 kubelet[2697]: E0707 00:23:15.917200 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:15.917602 kubelet[2697]: E0707 00:23:15.917578 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.917602 kubelet[2697]: W0707 00:23:15.917598 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.917775 kubelet[2697]: E0707 00:23:15.917622 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:15.918024 kubelet[2697]: E0707 00:23:15.918007 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.918082 kubelet[2697]: W0707 00:23:15.918025 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.918082 kubelet[2697]: E0707 00:23:15.918048 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:15.918317 kubelet[2697]: E0707 00:23:15.918301 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.918360 kubelet[2697]: W0707 00:23:15.918320 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.918360 kubelet[2697]: E0707 00:23:15.918336 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:15.919824 kubelet[2697]: E0707 00:23:15.919801 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.919824 kubelet[2697]: W0707 00:23:15.919820 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.919953 kubelet[2697]: E0707 00:23:15.919838 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:15.930050 kubelet[2697]: E0707 00:23:15.929963 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.930050 kubelet[2697]: W0707 00:23:15.929994 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.931403 kubelet[2697]: E0707 00:23:15.931025 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:15.934796 containerd[1539]: time="2025-07-07T00:23:15.934746249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9ddc77688-nfhjc,Uid:c057fc50-feec-4de1-8a9b-171daa297c37,Namespace:calico-system,Attempt:0,} returns sandbox id \"669d0d6022e581335cc78e76f64c0945d808a15aeddcbb6c8acd951d1c6777b5\"" Jul 7 00:23:15.936438 kubelet[2697]: E0707 00:23:15.935835 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:15.938598 containerd[1539]: time="2025-07-07T00:23:15.938554170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:23:15.961596 kubelet[2697]: E0707 00:23:15.961436 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:15.961596 kubelet[2697]: W0707 00:23:15.961489 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:15.961596 kubelet[2697]: E0707 00:23:15.961517 2697 plugins.go:691] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.030696 containerd[1539]: time="2025-07-07T00:23:16.030591162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g8rdb,Uid:e92d7d14-b0a3-4aa3-b72e-06dfcd9266be,Namespace:calico-system,Attempt:0,}" Jul 7 00:23:16.072166 containerd[1539]: time="2025-07-07T00:23:16.072114264Z" level=info msg="connecting to shim 913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c" address="unix:///run/containerd/s/9b01503bc6edf5daa5492cc1e3f29c8410accd014c2a37c21a949f083360572a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:16.078565 kubelet[2697]: E0707 00:23:16.078479 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8ldv" podUID="dba08fd4-9512-4c11-aac8-02de403331df" Jul 7 00:23:16.090741 kubelet[2697]: E0707 00:23:16.089197 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.091128 kubelet[2697]: W0707 00:23:16.090894 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.091128 kubelet[2697]: E0707 00:23:16.090928 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.091523 kubelet[2697]: E0707 00:23:16.091442 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.091523 kubelet[2697]: W0707 00:23:16.091459 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.091523 kubelet[2697]: E0707 00:23:16.091474 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.093026 kubelet[2697]: E0707 00:23:16.092910 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.093026 kubelet[2697]: W0707 00:23:16.092937 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.093026 kubelet[2697]: E0707 00:23:16.092958 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.093549 kubelet[2697]: E0707 00:23:16.093455 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.093549 kubelet[2697]: W0707 00:23:16.093471 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.093549 kubelet[2697]: E0707 00:23:16.093487 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.093968 kubelet[2697]: E0707 00:23:16.093906 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.093968 kubelet[2697]: W0707 00:23:16.093920 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.093968 kubelet[2697]: E0707 00:23:16.093931 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.094303 kubelet[2697]: E0707 00:23:16.094290 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.094450 kubelet[2697]: W0707 00:23:16.094364 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.094450 kubelet[2697]: E0707 00:23:16.094379 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.095216 kubelet[2697]: E0707 00:23:16.095196 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.095377 kubelet[2697]: W0707 00:23:16.095297 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.095377 kubelet[2697]: E0707 00:23:16.095314 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.096139 kubelet[2697]: E0707 00:23:16.096047 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.096139 kubelet[2697]: W0707 00:23:16.096065 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.096139 kubelet[2697]: E0707 00:23:16.096080 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.097149 kubelet[2697]: E0707 00:23:16.097064 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.097149 kubelet[2697]: W0707 00:23:16.097089 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.097149 kubelet[2697]: E0707 00:23:16.097111 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.097658 kubelet[2697]: E0707 00:23:16.097580 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.097658 kubelet[2697]: W0707 00:23:16.097593 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.097658 kubelet[2697]: E0707 00:23:16.097609 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.098012 kubelet[2697]: E0707 00:23:16.097948 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.098012 kubelet[2697]: W0707 00:23:16.097959 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.098012 kubelet[2697]: E0707 00:23:16.097970 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.098324 kubelet[2697]: E0707 00:23:16.098230 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.098324 kubelet[2697]: W0707 00:23:16.098244 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.098324 kubelet[2697]: E0707 00:23:16.098256 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.098661 kubelet[2697]: E0707 00:23:16.098603 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.098661 kubelet[2697]: W0707 00:23:16.098615 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.098661 kubelet[2697]: E0707 00:23:16.098626 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.099365 kubelet[2697]: E0707 00:23:16.099297 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.099365 kubelet[2697]: W0707 00:23:16.099311 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.099365 kubelet[2697]: E0707 00:23:16.099322 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.099724 kubelet[2697]: E0707 00:23:16.099642 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.099724 kubelet[2697]: W0707 00:23:16.099653 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.099724 kubelet[2697]: E0707 00:23:16.099664 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.100053 kubelet[2697]: E0707 00:23:16.099980 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.100053 kubelet[2697]: W0707 00:23:16.099992 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.100053 kubelet[2697]: E0707 00:23:16.100002 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.100341 kubelet[2697]: E0707 00:23:16.100287 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.100341 kubelet[2697]: W0707 00:23:16.100298 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.100341 kubelet[2697]: E0707 00:23:16.100308 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.101064 kubelet[2697]: E0707 00:23:16.101046 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.101219 kubelet[2697]: W0707 00:23:16.101152 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.101219 kubelet[2697]: E0707 00:23:16.101169 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.101521 kubelet[2697]: E0707 00:23:16.101436 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.101521 kubelet[2697]: W0707 00:23:16.101457 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.101521 kubelet[2697]: E0707 00:23:16.101470 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.102206 kubelet[2697]: E0707 00:23:16.102188 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.102359 kubelet[2697]: W0707 00:23:16.102269 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.102359 kubelet[2697]: E0707 00:23:16.102285 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.110324 kubelet[2697]: E0707 00:23:16.110235 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.110324 kubelet[2697]: W0707 00:23:16.110270 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.110798 kubelet[2697]: E0707 00:23:16.110297 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.110798 kubelet[2697]: I0707 00:23:16.110643 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dba08fd4-9512-4c11-aac8-02de403331df-varrun\") pod \"csi-node-driver-z8ldv\" (UID: \"dba08fd4-9512-4c11-aac8-02de403331df\") " pod="calico-system/csi-node-driver-z8ldv" Jul 7 00:23:16.111226 kubelet[2697]: E0707 00:23:16.111179 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.111226 kubelet[2697]: W0707 00:23:16.111200 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.111663 kubelet[2697]: E0707 00:23:16.111495 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.111829 kubelet[2697]: E0707 00:23:16.111817 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.111892 kubelet[2697]: W0707 00:23:16.111875 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.111949 kubelet[2697]: E0707 00:23:16.111940 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.112476 kubelet[2697]: E0707 00:23:16.112459 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.112691 kubelet[2697]: W0707 00:23:16.112620 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.112989 kubelet[2697]: E0707 00:23:16.112828 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.113123 kubelet[2697]: I0707 00:23:16.113106 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba08fd4-9512-4c11-aac8-02de403331df-kubelet-dir\") pod \"csi-node-driver-z8ldv\" (UID: \"dba08fd4-9512-4c11-aac8-02de403331df\") " pod="calico-system/csi-node-driver-z8ldv" Jul 7 00:23:16.113262 kubelet[2697]: E0707 00:23:16.113253 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.113401 kubelet[2697]: W0707 00:23:16.113362 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.113557 kubelet[2697]: E0707 00:23:16.113512 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.116667 kubelet[2697]: E0707 00:23:16.115989 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.116964 kubelet[2697]: W0707 00:23:16.116861 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.116964 kubelet[2697]: E0707 00:23:16.116916 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.117366 kubelet[2697]: E0707 00:23:16.117317 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.117366 kubelet[2697]: W0707 00:23:16.117332 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.117366 kubelet[2697]: E0707 00:23:16.117347 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.117606 kubelet[2697]: I0707 00:23:16.117504 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dba08fd4-9512-4c11-aac8-02de403331df-socket-dir\") pod \"csi-node-driver-z8ldv\" (UID: \"dba08fd4-9512-4c11-aac8-02de403331df\") " pod="calico-system/csi-node-driver-z8ldv" Jul 7 00:23:16.117866 kubelet[2697]: E0707 00:23:16.117852 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.117941 kubelet[2697]: W0707 00:23:16.117917 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.118085 kubelet[2697]: E0707 00:23:16.118002 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.118898 kubelet[2697]: E0707 00:23:16.118876 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.119065 kubelet[2697]: W0707 00:23:16.118971 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.119065 kubelet[2697]: E0707 00:23:16.119015 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.119406 kubelet[2697]: E0707 00:23:16.119389 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.119506 kubelet[2697]: W0707 00:23:16.119473 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.119506 kubelet[2697]: E0707 00:23:16.119489 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.119609 kubelet[2697]: I0707 00:23:16.119596 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dba08fd4-9512-4c11-aac8-02de403331df-registration-dir\") pod \"csi-node-driver-z8ldv\" (UID: \"dba08fd4-9512-4c11-aac8-02de403331df\") " pod="calico-system/csi-node-driver-z8ldv" Jul 7 00:23:16.119928 kubelet[2697]: E0707 00:23:16.119916 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.119983 kubelet[2697]: W0707 00:23:16.119974 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.120045 kubelet[2697]: E0707 00:23:16.120037 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.120255 kubelet[2697]: E0707 00:23:16.120244 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.120309 kubelet[2697]: W0707 00:23:16.120301 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.120720 kubelet[2697]: E0707 00:23:16.120641 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.121012 kubelet[2697]: I0707 00:23:16.120982 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrgf\" (UniqueName: \"kubernetes.io/projected/dba08fd4-9512-4c11-aac8-02de403331df-kube-api-access-tcrgf\") pod \"csi-node-driver-z8ldv\" (UID: \"dba08fd4-9512-4c11-aac8-02de403331df\") " pod="calico-system/csi-node-driver-z8ldv" Jul 7 00:23:16.121311 kubelet[2697]: E0707 00:23:16.121260 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.121311 kubelet[2697]: W0707 00:23:16.121276 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.121311 kubelet[2697]: E0707 00:23:16.121291 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:16.121985 kubelet[2697]: E0707 00:23:16.121966 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.122166 kubelet[2697]: W0707 00:23:16.122085 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.122166 kubelet[2697]: E0707 00:23:16.122109 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.122925 kubelet[2697]: E0707 00:23:16.122853 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:16.122925 kubelet[2697]: W0707 00:23:16.122872 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:16.122925 kubelet[2697]: E0707 00:23:16.122888 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:16.149002 systemd[1]: Started cri-containerd-913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c.scope - libcontainer container 913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c. 
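The driver-call failures repeated throughout this log come from kubelet probing the FlexVolume plugin directory nodeagent~uds, failing to find the executable at the probed path, and then trying to parse the resulting empty output as JSON. As a minimal sketch (the driver name and path are taken from the log itself; the JSON shape follows the FlexVolume driver protocol, and the stub below is hypothetical, not the missing driver's real implementation), an executable that would satisfy the probe could look like:

```shell
#!/bin/sh
# Hypothetical stub for the FlexVolume driver kubelet probes above:
#   /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
# kubelet invokes the driver with a subcommand ("init" at probe time) and
# parses its stdout as JSON; an empty reply is what produces the repeated
# "unexpected end of JSON input" errors in this log.

driver() {
  case "$1" in
    init)
      # Report success and declare that this driver does not implement
      # attach/detach, so kubelet handles attachment itself.
      echo '{"status": "Success", "capabilities": {"attach": false}}'
      ;;
    *)
      # Any operation this stub does not implement.
      echo '{"status": "Not supported"}'
      return 1
      ;;
  esac
}

driver init
```

In practice the usual remedy is to install the real driver binary (or remove the stale nodeagent~uds directory from the plugin path) rather than to stub it out; the sketch only illustrates the init handshake kubelet expects.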
Jul 7 00:23:16.212307 containerd[1539]: time="2025-07-07T00:23:16.211844585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g8rdb,Uid:e92d7d14-b0a3-4aa3-b72e-06dfcd9266be,Namespace:calico-system,Attempt:0,} returns sandbox id \"913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c\""
[Identical FlexVolume driver-call error triplets (driver-call.go:262, driver-call.go:149, plugins.go:691, as above) repeated between 00:23:16.222 and 00:23:16.254; duplicates omitted.]
Jul 7 00:23:17.264049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139783839.mount: Deactivated successfully.
Jul 7 00:23:17.345920 kubelet[2697]: E0707 00:23:17.342517 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8ldv" podUID="dba08fd4-9512-4c11-aac8-02de403331df"
Jul 7 00:23:18.584565 containerd[1539]: time="2025-07-07T00:23:18.584487849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:23:18.585564 containerd[1539]: time="2025-07-07T00:23:18.585516084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 7 00:23:18.587385 containerd[1539]: time="2025-07-07T00:23:18.586964928Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:23:18.589774 containerd[1539]: time="2025-07-07T00:23:18.589650244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:23:18.590724 containerd[1539]: time="2025-07-07T00:23:18.590640853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.65203789s"
Jul 7 00:23:18.590724 containerd[1539]: time="2025-07-07T00:23:18.590707007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 7 00:23:18.593916 containerd[1539]: time="2025-07-07T00:23:18.593750139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 7 00:23:18.626714 containerd[1539]: time="2025-07-07T00:23:18.625152774Z" level=info msg="CreateContainer within sandbox \"669d0d6022e581335cc78e76f64c0945d808a15aeddcbb6c8acd951d1c6777b5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 7 00:23:18.646991 containerd[1539]: time="2025-07-07T00:23:18.646933878Z" level=info msg="Container 60abdf4f2bccd471b551b28fe6e98f3a344782dd3fd479ec6d3d65d6291ec921: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:23:18.661121 containerd[1539]: time="2025-07-07T00:23:18.660596561Z" level=info msg="CreateContainer within sandbox \"669d0d6022e581335cc78e76f64c0945d808a15aeddcbb6c8acd951d1c6777b5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"60abdf4f2bccd471b551b28fe6e98f3a344782dd3fd479ec6d3d65d6291ec921\""
Jul 7 00:23:18.665249 containerd[1539]: time="2025-07-07T00:23:18.663529696Z" level=info msg="StartContainer for \"60abdf4f2bccd471b551b28fe6e98f3a344782dd3fd479ec6d3d65d6291ec921\""
Jul 7 00:23:18.666342 containerd[1539]: time="2025-07-07T00:23:18.666243211Z" level=info msg="connecting to shim 60abdf4f2bccd471b551b28fe6e98f3a344782dd3fd479ec6d3d65d6291ec921" address="unix:///run/containerd/s/a92f2dc25d93c0989dfc6f7f98e0929c113806803777ed035d8426cf3269e6b1" protocol=ttrpc version=3
Jul 7 00:23:18.711039 systemd[1]: Started cri-containerd-60abdf4f2bccd471b551b28fe6e98f3a344782dd3fd479ec6d3d65d6291ec921.scope - libcontainer container 60abdf4f2bccd471b551b28fe6e98f3a344782dd3fd479ec6d3d65d6291ec921.
Jul 7 00:23:18.795189 containerd[1539]: time="2025-07-07T00:23:18.795111468Z" level=info msg="StartContainer for \"60abdf4f2bccd471b551b28fe6e98f3a344782dd3fd479ec6d3d65d6291ec921\" returns successfully" Jul 7 00:23:19.342435 kubelet[2697]: E0707 00:23:19.341513 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8ldv" podUID="dba08fd4-9512-4c11-aac8-02de403331df" Jul 7 00:23:19.457089 kubelet[2697]: E0707 00:23:19.457026 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:19.485964 kubelet[2697]: I0707 00:23:19.485839 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9ddc77688-nfhjc" podStartSLOduration=1.8299233990000001 podStartE2EDuration="4.485813866s" podCreationTimestamp="2025-07-07 00:23:15 +0000 UTC" firstStartedPulling="2025-07-07 00:23:15.937546783 +0000 UTC m=+21.727461225" lastFinishedPulling="2025-07-07 00:23:18.593437216 +0000 UTC m=+24.383351692" observedRunningTime="2025-07-07 00:23:19.478791208 +0000 UTC m=+25.268705655" watchObservedRunningTime="2025-07-07 00:23:19.485813866 +0000 UTC m=+25.275728315" Jul 7 00:23:19.529780 kubelet[2697]: E0707 00:23:19.529562 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.529780 kubelet[2697]: W0707 00:23:19.529769 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.530175 kubelet[2697]: E0707 00:23:19.529800 2697 plugins.go:691] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:19.530643 kubelet[2697]: E0707 00:23:19.530614 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.530643 kubelet[2697]: W0707 00:23:19.530635 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.530955 kubelet[2697]: E0707 00:23:19.530655 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:19.531177 kubelet[2697]: E0707 00:23:19.531158 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.531177 kubelet[2697]: W0707 00:23:19.531173 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.531556 kubelet[2697]: E0707 00:23:19.531193 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:19.531639 kubelet[2697]: E0707 00:23:19.531623 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.531740 kubelet[2697]: W0707 00:23:19.531639 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.531740 kubelet[2697]: E0707 00:23:19.531658 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:19.532326 kubelet[2697]: E0707 00:23:19.532239 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.532326 kubelet[2697]: W0707 00:23:19.532257 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.532326 kubelet[2697]: E0707 00:23:19.532272 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:19.532613 kubelet[2697]: E0707 00:23:19.532587 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.532613 kubelet[2697]: W0707 00:23:19.532599 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.532613 kubelet[2697]: E0707 00:23:19.532610 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:19.533419 kubelet[2697]: E0707 00:23:19.533043 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.533419 kubelet[2697]: W0707 00:23:19.533052 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.533419 kubelet[2697]: E0707 00:23:19.533080 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:19.533419 kubelet[2697]: E0707 00:23:19.533395 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.533419 kubelet[2697]: W0707 00:23:19.533406 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.533419 kubelet[2697]: E0707 00:23:19.533417 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:19.534594 kubelet[2697]: E0707 00:23:19.534563 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.534594 kubelet[2697]: W0707 00:23:19.534588 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.534897 kubelet[2697]: E0707 00:23:19.534605 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:19.535544 kubelet[2697]: E0707 00:23:19.535503 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.536425 kubelet[2697]: W0707 00:23:19.535540 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.536425 kubelet[2697]: E0707 00:23:19.535824 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:19.536425 kubelet[2697]: E0707 00:23:19.536232 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.536425 kubelet[2697]: W0707 00:23:19.536247 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.536425 kubelet[2697]: E0707 00:23:19.536381 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:19.536912 kubelet[2697]: E0707 00:23:19.536878 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.536912 kubelet[2697]: W0707 00:23:19.536894 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.537343 kubelet[2697]: E0707 00:23:19.537320 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:19.537839 kubelet[2697]: E0707 00:23:19.537819 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.537839 kubelet[2697]: W0707 00:23:19.537833 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.537839 kubelet[2697]: E0707 00:23:19.537845 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:19.538263 kubelet[2697]: E0707 00:23:19.538247 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.538263 kubelet[2697]: W0707 00:23:19.538260 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.538525 kubelet[2697]: E0707 00:23:19.538490 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:23:19.539773 kubelet[2697]: E0707 00:23:19.539697 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:23:19.539773 kubelet[2697]: W0707 00:23:19.539719 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:23:19.539773 kubelet[2697]: E0707 00:23:19.539738 2697 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:23:19.944025 containerd[1539]: time="2025-07-07T00:23:19.943787036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:19.944906 containerd[1539]: time="2025-07-07T00:23:19.944639204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:23:19.945312 containerd[1539]: time="2025-07-07T00:23:19.945275163Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:19.947723 containerd[1539]: time="2025-07-07T00:23:19.947271063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:19.948120 containerd[1539]: time="2025-07-07T00:23:19.948062343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.354256263s" Jul 7 00:23:19.948218 containerd[1539]: time="2025-07-07T00:23:19.948125365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:23:19.952441 containerd[1539]: time="2025-07-07T00:23:19.952401492Z" level=info msg="CreateContainer within sandbox \"913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:23:19.973938 containerd[1539]: time="2025-07-07T00:23:19.972923754Z" level=info msg="Container 88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:19.995081 containerd[1539]: time="2025-07-07T00:23:19.995018759Z" level=info msg="CreateContainer within sandbox \"913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4\"" Jul 7 00:23:19.996032 containerd[1539]: time="2025-07-07T00:23:19.995981819Z" level=info msg="StartContainer for \"88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4\"" Jul 7 00:23:19.998149 containerd[1539]: time="2025-07-07T00:23:19.998089287Z" level=info msg="connecting to shim 88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4" address="unix:///run/containerd/s/9b01503bc6edf5daa5492cc1e3f29c8410accd014c2a37c21a949f083360572a" protocol=ttrpc version=3 Jul 7 00:23:20.025985 systemd[1]: Started cri-containerd-88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4.scope - libcontainer container 88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4. Jul 7 00:23:20.095210 containerd[1539]: time="2025-07-07T00:23:20.095163624Z" level=info msg="StartContainer for \"88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4\" returns successfully" Jul 7 00:23:20.115832 systemd[1]: cri-containerd-88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4.scope: Deactivated successfully. 
Jul 7 00:23:20.221125 containerd[1539]: time="2025-07-07T00:23:20.220952303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4\" id:\"88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4\" pid:3401 exited_at:{seconds:1751847800 nanos:122600804}" Jul 7 00:23:20.231697 containerd[1539]: time="2025-07-07T00:23:20.230309325Z" level=info msg="received exit event container_id:\"88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4\" id:\"88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4\" pid:3401 exited_at:{seconds:1751847800 nanos:122600804}" Jul 7 00:23:20.273776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88cda9a8754d82b1fe73809893fc69e914dcd4106c7f8427b74a30cd68dba3d4-rootfs.mount: Deactivated successfully. Jul 7 00:23:20.462881 kubelet[2697]: E0707 00:23:20.462333 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:20.468938 containerd[1539]: time="2025-07-07T00:23:20.468895899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:23:21.342373 kubelet[2697]: E0707 00:23:21.342267 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8ldv" podUID="dba08fd4-9512-4c11-aac8-02de403331df" Jul 7 00:23:21.464960 kubelet[2697]: E0707 00:23:21.464889 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:23.342126 kubelet[2697]: E0707 00:23:23.342038 2697 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8ldv" podUID="dba08fd4-9512-4c11-aac8-02de403331df" Jul 7 00:23:24.220995 containerd[1539]: time="2025-07-07T00:23:24.220910555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:24.222166 containerd[1539]: time="2025-07-07T00:23:24.222122865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:23:24.222962 containerd[1539]: time="2025-07-07T00:23:24.222916371Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:24.225534 containerd[1539]: time="2025-07-07T00:23:24.225475581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:24.226637 containerd[1539]: time="2025-07-07T00:23:24.226583524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.757648237s" Jul 7 00:23:24.226637 containerd[1539]: time="2025-07-07T00:23:24.226634854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:23:24.230448 containerd[1539]: time="2025-07-07T00:23:24.230362776Z" level=info msg="CreateContainer 
within sandbox \"913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:23:24.243720 containerd[1539]: time="2025-07-07T00:23:24.243447167Z" level=info msg="Container 968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:24.248326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037868865.mount: Deactivated successfully. Jul 7 00:23:24.265670 containerd[1539]: time="2025-07-07T00:23:24.265588056Z" level=info msg="CreateContainer within sandbox \"913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e\"" Jul 7 00:23:24.266849 containerd[1539]: time="2025-07-07T00:23:24.266795711Z" level=info msg="StartContainer for \"968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e\"" Jul 7 00:23:24.269030 containerd[1539]: time="2025-07-07T00:23:24.268947591Z" level=info msg="connecting to shim 968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e" address="unix:///run/containerd/s/9b01503bc6edf5daa5492cc1e3f29c8410accd014c2a37c21a949f083360572a" protocol=ttrpc version=3 Jul 7 00:23:24.304976 systemd[1]: Started cri-containerd-968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e.scope - libcontainer container 968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e. Jul 7 00:23:24.372310 containerd[1539]: time="2025-07-07T00:23:24.372240323Z" level=info msg="StartContainer for \"968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e\" returns successfully" Jul 7 00:23:25.040378 systemd[1]: cri-containerd-968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e.scope: Deactivated successfully. 
Jul 7 00:23:25.041418 systemd[1]: cri-containerd-968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e.scope: Consumed 624ms CPU time, 168M memory peak, 14M read from disk, 171.2M written to disk. Jul 7 00:23:25.043159 containerd[1539]: time="2025-07-07T00:23:25.041927457Z" level=info msg="received exit event container_id:\"968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e\" id:\"968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e\" pid:3458 exited_at:{seconds:1751847805 nanos:41611899}" Jul 7 00:23:25.049793 containerd[1539]: time="2025-07-07T00:23:25.048875477Z" level=info msg="TaskExit event in podsandbox handler container_id:\"968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e\" id:\"968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e\" pid:3458 exited_at:{seconds:1751847805 nanos:41611899}" Jul 7 00:23:25.109365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-968e58e302989978f88ac3289877e1c46b845627ba149374ff1107c3a743ca6e-rootfs.mount: Deactivated successfully. Jul 7 00:23:25.131993 kubelet[2697]: I0707 00:23:25.131877 2697 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 00:23:25.204214 systemd[1]: Created slice kubepods-burstable-pod41794743_531f_45a9_880c_f07b5eb8bc43.slice - libcontainer container kubepods-burstable-pod41794743_531f_45a9_880c_f07b5eb8bc43.slice. 
Jul 7 00:23:25.208095 kubelet[2697]: I0707 00:23:25.207979 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btnq7\" (UniqueName: \"kubernetes.io/projected/4ab792d8-2ac7-47ae-8618-31a2f17ae776-kube-api-access-btnq7\") pod \"goldmane-58fd7646b9-gns4s\" (UID: \"4ab792d8-2ac7-47ae-8618-31a2f17ae776\") " pod="calico-system/goldmane-58fd7646b9-gns4s" Jul 7 00:23:25.208095 kubelet[2697]: I0707 00:23:25.208020 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41794743-531f-45a9-880c-f07b5eb8bc43-config-volume\") pod \"coredns-7c65d6cfc9-q95zr\" (UID: \"41794743-531f-45a9-880c-f07b5eb8bc43\") " pod="kube-system/coredns-7c65d6cfc9-q95zr" Jul 7 00:23:25.208095 kubelet[2697]: I0707 00:23:25.208039 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwph\" (UniqueName: \"kubernetes.io/projected/41794743-531f-45a9-880c-f07b5eb8bc43-kube-api-access-hxwph\") pod \"coredns-7c65d6cfc9-q95zr\" (UID: \"41794743-531f-45a9-880c-f07b5eb8bc43\") " pod="kube-system/coredns-7c65d6cfc9-q95zr" Jul 7 00:23:25.208095 kubelet[2697]: I0707 00:23:25.208059 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrzzm\" (UniqueName: \"kubernetes.io/projected/bc0753b2-8f52-45cd-b135-8205c86199e7-kube-api-access-jrzzm\") pod \"calico-kube-controllers-7b7cc78f44-gxnd4\" (UID: \"bc0753b2-8f52-45cd-b135-8205c86199e7\") " pod="calico-system/calico-kube-controllers-7b7cc78f44-gxnd4" Jul 7 00:23:25.208095 kubelet[2697]: I0707 00:23:25.208075 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvb29\" (UniqueName: \"kubernetes.io/projected/ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73-kube-api-access-rvb29\") pod \"calico-apiserver-6f56d9bbdd-rm6r9\" 
(UID: \"ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73\") " pod="calico-apiserver/calico-apiserver-6f56d9bbdd-rm6r9" Jul 7 00:23:25.208330 kubelet[2697]: I0707 00:23:25.208093 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab792d8-2ac7-47ae-8618-31a2f17ae776-config\") pod \"goldmane-58fd7646b9-gns4s\" (UID: \"4ab792d8-2ac7-47ae-8618-31a2f17ae776\") " pod="calico-system/goldmane-58fd7646b9-gns4s" Jul 7 00:23:25.208330 kubelet[2697]: I0707 00:23:25.208108 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4ab792d8-2ac7-47ae-8618-31a2f17ae776-goldmane-key-pair\") pod \"goldmane-58fd7646b9-gns4s\" (UID: \"4ab792d8-2ac7-47ae-8618-31a2f17ae776\") " pod="calico-system/goldmane-58fd7646b9-gns4s" Jul 7 00:23:25.208330 kubelet[2697]: I0707 00:23:25.208125 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73-calico-apiserver-certs\") pod \"calico-apiserver-6f56d9bbdd-rm6r9\" (UID: \"ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73\") " pod="calico-apiserver/calico-apiserver-6f56d9bbdd-rm6r9" Jul 7 00:23:25.208330 kubelet[2697]: I0707 00:23:25.208141 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4-calico-apiserver-certs\") pod \"calico-apiserver-6f56d9bbdd-88gl4\" (UID: \"1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4\") " pod="calico-apiserver/calico-apiserver-6f56d9bbdd-88gl4" Jul 7 00:23:25.208330 kubelet[2697]: I0707 00:23:25.208160 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bc0753b2-8f52-45cd-b135-8205c86199e7-tigera-ca-bundle\") pod \"calico-kube-controllers-7b7cc78f44-gxnd4\" (UID: \"bc0753b2-8f52-45cd-b135-8205c86199e7\") " pod="calico-system/calico-kube-controllers-7b7cc78f44-gxnd4" Jul 7 00:23:25.208491 kubelet[2697]: I0707 00:23:25.208178 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8w4\" (UniqueName: \"kubernetes.io/projected/1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4-kube-api-access-ss8w4\") pod \"calico-apiserver-6f56d9bbdd-88gl4\" (UID: \"1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4\") " pod="calico-apiserver/calico-apiserver-6f56d9bbdd-88gl4" Jul 7 00:23:25.208491 kubelet[2697]: I0707 00:23:25.208198 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ab792d8-2ac7-47ae-8618-31a2f17ae776-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-gns4s\" (UID: \"4ab792d8-2ac7-47ae-8618-31a2f17ae776\") " pod="calico-system/goldmane-58fd7646b9-gns4s" Jul 7 00:23:25.220721 systemd[1]: Created slice kubepods-besteffort-podbc0753b2_8f52_45cd_b135_8205c86199e7.slice - libcontainer container kubepods-besteffort-podbc0753b2_8f52_45cd_b135_8205c86199e7.slice. Jul 7 00:23:25.235737 systemd[1]: Created slice kubepods-besteffort-pod4ab792d8_2ac7_47ae_8618_31a2f17ae776.slice - libcontainer container kubepods-besteffort-pod4ab792d8_2ac7_47ae_8618_31a2f17ae776.slice. Jul 7 00:23:25.247737 systemd[1]: Created slice kubepods-besteffort-podee3f96cb_ad32_4cbc_bfdc_11ac89b39a73.slice - libcontainer container kubepods-besteffort-podee3f96cb_ad32_4cbc_bfdc_11ac89b39a73.slice. Jul 7 00:23:25.273503 systemd[1]: Created slice kubepods-besteffort-pod1f38a8f0_3f6d_4d3b_ac4a_4494601c3dd4.slice - libcontainer container kubepods-besteffort-pod1f38a8f0_3f6d_4d3b_ac4a_4494601c3dd4.slice. 
Jul 7 00:23:25.283318 systemd[1]: Created slice kubepods-besteffort-podc0f88e51_b094_448a_8cba_5aadfe68ed5e.slice - libcontainer container kubepods-besteffort-podc0f88e51_b094_448a_8cba_5aadfe68ed5e.slice. Jul 7 00:23:25.292829 systemd[1]: Created slice kubepods-burstable-pod880d1335_8d37_4be4_92d9_2b04accdacc0.slice - libcontainer container kubepods-burstable-pod880d1335_8d37_4be4_92d9_2b04accdacc0.slice. Jul 7 00:23:25.311649 kubelet[2697]: I0707 00:23:25.310657 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c0f88e51-b094-448a-8cba-5aadfe68ed5e-whisker-backend-key-pair\") pod \"whisker-644cd9b6d9-bvqgr\" (UID: \"c0f88e51-b094-448a-8cba-5aadfe68ed5e\") " pod="calico-system/whisker-644cd9b6d9-bvqgr" Jul 7 00:23:25.311649 kubelet[2697]: I0707 00:23:25.310874 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/880d1335-8d37-4be4-92d9-2b04accdacc0-config-volume\") pod \"coredns-7c65d6cfc9-2b857\" (UID: \"880d1335-8d37-4be4-92d9-2b04accdacc0\") " pod="kube-system/coredns-7c65d6cfc9-2b857" Jul 7 00:23:25.311649 kubelet[2697]: I0707 00:23:25.310957 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0f88e51-b094-448a-8cba-5aadfe68ed5e-whisker-ca-bundle\") pod \"whisker-644cd9b6d9-bvqgr\" (UID: \"c0f88e51-b094-448a-8cba-5aadfe68ed5e\") " pod="calico-system/whisker-644cd9b6d9-bvqgr" Jul 7 00:23:25.311649 kubelet[2697]: I0707 00:23:25.310985 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wwlg\" (UniqueName: \"kubernetes.io/projected/c0f88e51-b094-448a-8cba-5aadfe68ed5e-kube-api-access-7wwlg\") pod \"whisker-644cd9b6d9-bvqgr\" (UID: \"c0f88e51-b094-448a-8cba-5aadfe68ed5e\") " 
pod="calico-system/whisker-644cd9b6d9-bvqgr" Jul 7 00:23:25.311649 kubelet[2697]: I0707 00:23:25.311064 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kwvk\" (UniqueName: \"kubernetes.io/projected/880d1335-8d37-4be4-92d9-2b04accdacc0-kube-api-access-7kwvk\") pod \"coredns-7c65d6cfc9-2b857\" (UID: \"880d1335-8d37-4be4-92d9-2b04accdacc0\") " pod="kube-system/coredns-7c65d6cfc9-2b857" Jul 7 00:23:25.376239 systemd[1]: Created slice kubepods-besteffort-poddba08fd4_9512_4c11_aac8_02de403331df.slice - libcontainer container kubepods-besteffort-poddba08fd4_9512_4c11_aac8_02de403331df.slice. Jul 7 00:23:25.399261 containerd[1539]: time="2025-07-07T00:23:25.399218814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8ldv,Uid:dba08fd4-9512-4c11-aac8-02de403331df,Namespace:calico-system,Attempt:0,}" Jul 7 00:23:25.515469 containerd[1539]: time="2025-07-07T00:23:25.514856425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:23:25.518782 kubelet[2697]: E0707 00:23:25.518403 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:25.520576 containerd[1539]: time="2025-07-07T00:23:25.519878971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q95zr,Uid:41794743-531f-45a9-880c-f07b5eb8bc43,Namespace:kube-system,Attempt:0,}" Jul 7 00:23:25.532493 containerd[1539]: time="2025-07-07T00:23:25.531537582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7cc78f44-gxnd4,Uid:bc0753b2-8f52-45cd-b135-8205c86199e7,Namespace:calico-system,Attempt:0,}" Jul 7 00:23:25.540710 containerd[1539]: time="2025-07-07T00:23:25.540356519Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-58fd7646b9-gns4s,Uid:4ab792d8-2ac7-47ae-8618-31a2f17ae776,Namespace:calico-system,Attempt:0,}" Jul 7 00:23:25.559087 containerd[1539]: time="2025-07-07T00:23:25.558905695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f56d9bbdd-rm6r9,Uid:ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:23:25.590328 containerd[1539]: time="2025-07-07T00:23:25.590206243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-644cd9b6d9-bvqgr,Uid:c0f88e51-b094-448a-8cba-5aadfe68ed5e,Namespace:calico-system,Attempt:0,}" Jul 7 00:23:25.593297 containerd[1539]: time="2025-07-07T00:23:25.593221391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f56d9bbdd-88gl4,Uid:1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:23:25.601400 kubelet[2697]: E0707 00:23:25.601157 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:25.603269 containerd[1539]: time="2025-07-07T00:23:25.602987963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2b857,Uid:880d1335-8d37-4be4-92d9-2b04accdacc0,Namespace:kube-system,Attempt:0,}" Jul 7 00:23:25.738522 containerd[1539]: time="2025-07-07T00:23:25.738471421Z" level=error msg="Failed to destroy network for sandbox \"08a0dc039441ffa712d6cac8e9b5df79b2aac61428144fb03689c00105b99d16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.744340 containerd[1539]: time="2025-07-07T00:23:25.744160560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-gns4s,Uid:4ab792d8-2ac7-47ae-8618-31a2f17ae776,Namespace:calico-system,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a0dc039441ffa712d6cac8e9b5df79b2aac61428144fb03689c00105b99d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.745324 kubelet[2697]: E0707 00:23:25.745262 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a0dc039441ffa712d6cac8e9b5df79b2aac61428144fb03689c00105b99d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.745514 kubelet[2697]: E0707 00:23:25.745378 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a0dc039441ffa712d6cac8e9b5df79b2aac61428144fb03689c00105b99d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-gns4s" Jul 7 00:23:25.745514 kubelet[2697]: E0707 00:23:25.745406 2697 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a0dc039441ffa712d6cac8e9b5df79b2aac61428144fb03689c00105b99d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-gns4s" Jul 7 00:23:25.745514 kubelet[2697]: E0707 00:23:25.745462 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-gns4s_calico-system(4ab792d8-2ac7-47ae-8618-31a2f17ae776)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-gns4s_calico-system(4ab792d8-2ac7-47ae-8618-31a2f17ae776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08a0dc039441ffa712d6cac8e9b5df79b2aac61428144fb03689c00105b99d16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-gns4s" podUID="4ab792d8-2ac7-47ae-8618-31a2f17ae776" Jul 7 00:23:25.803272 containerd[1539]: time="2025-07-07T00:23:25.803108786Z" level=error msg="Failed to destroy network for sandbox \"c32e99a7d9c6bf36157b32fa041990a4bdd1108bc32eddf7b3621f343d73ecb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.810201 containerd[1539]: time="2025-07-07T00:23:25.809860965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8ldv,Uid:dba08fd4-9512-4c11-aac8-02de403331df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32e99a7d9c6bf36157b32fa041990a4bdd1108bc32eddf7b3621f343d73ecb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.810352 kubelet[2697]: E0707 00:23:25.810172 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32e99a7d9c6bf36157b32fa041990a4bdd1108bc32eddf7b3621f343d73ecb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.811125 
kubelet[2697]: E0707 00:23:25.810350 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32e99a7d9c6bf36157b32fa041990a4bdd1108bc32eddf7b3621f343d73ecb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z8ldv" Jul 7 00:23:25.811125 kubelet[2697]: E0707 00:23:25.810720 2697 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32e99a7d9c6bf36157b32fa041990a4bdd1108bc32eddf7b3621f343d73ecb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z8ldv" Jul 7 00:23:25.811125 kubelet[2697]: E0707 00:23:25.810802 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z8ldv_calico-system(dba08fd4-9512-4c11-aac8-02de403331df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z8ldv_calico-system(dba08fd4-9512-4c11-aac8-02de403331df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c32e99a7d9c6bf36157b32fa041990a4bdd1108bc32eddf7b3621f343d73ecb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z8ldv" podUID="dba08fd4-9512-4c11-aac8-02de403331df" Jul 7 00:23:25.841762 containerd[1539]: time="2025-07-07T00:23:25.841716075Z" level=error msg="Failed to destroy network for sandbox \"c98b51a1447e96fc02622af1f860b804e40904274c33c12a85d0b3dda7db9031\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.847703 containerd[1539]: time="2025-07-07T00:23:25.847616340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7cc78f44-gxnd4,Uid:bc0753b2-8f52-45cd-b135-8205c86199e7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98b51a1447e96fc02622af1f860b804e40904274c33c12a85d0b3dda7db9031\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.848207 kubelet[2697]: E0707 00:23:25.848160 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98b51a1447e96fc02622af1f860b804e40904274c33c12a85d0b3dda7db9031\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.848308 kubelet[2697]: E0707 00:23:25.848232 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98b51a1447e96fc02622af1f860b804e40904274c33c12a85d0b3dda7db9031\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7cc78f44-gxnd4" Jul 7 00:23:25.848308 kubelet[2697]: E0707 00:23:25.848253 2697 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98b51a1447e96fc02622af1f860b804e40904274c33c12a85d0b3dda7db9031\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b7cc78f44-gxnd4" Jul 7 00:23:25.848373 kubelet[2697]: E0707 00:23:25.848302 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b7cc78f44-gxnd4_calico-system(bc0753b2-8f52-45cd-b135-8205c86199e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b7cc78f44-gxnd4_calico-system(bc0753b2-8f52-45cd-b135-8205c86199e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c98b51a1447e96fc02622af1f860b804e40904274c33c12a85d0b3dda7db9031\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b7cc78f44-gxnd4" podUID="bc0753b2-8f52-45cd-b135-8205c86199e7" Jul 7 00:23:25.858483 containerd[1539]: time="2025-07-07T00:23:25.858265485Z" level=error msg="Failed to destroy network for sandbox \"c38fdbcdfea5a4e715ac491716499b0a4671f182821a8d1348d6a3a16ba2beea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.859098 containerd[1539]: time="2025-07-07T00:23:25.859032159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2b857,Uid:880d1335-8d37-4be4-92d9-2b04accdacc0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38fdbcdfea5a4e715ac491716499b0a4671f182821a8d1348d6a3a16ba2beea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 00:23:25.859473 kubelet[2697]: E0707 00:23:25.859313 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38fdbcdfea5a4e715ac491716499b0a4671f182821a8d1348d6a3a16ba2beea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.859473 kubelet[2697]: E0707 00:23:25.859375 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38fdbcdfea5a4e715ac491716499b0a4671f182821a8d1348d6a3a16ba2beea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2b857" Jul 7 00:23:25.859473 kubelet[2697]: E0707 00:23:25.859395 2697 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38fdbcdfea5a4e715ac491716499b0a4671f182821a8d1348d6a3a16ba2beea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2b857" Jul 7 00:23:25.859858 kubelet[2697]: E0707 00:23:25.859496 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2b857_kube-system(880d1335-8d37-4be4-92d9-2b04accdacc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2b857_kube-system(880d1335-8d37-4be4-92d9-2b04accdacc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c38fdbcdfea5a4e715ac491716499b0a4671f182821a8d1348d6a3a16ba2beea\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2b857" podUID="880d1335-8d37-4be4-92d9-2b04accdacc0" Jul 7 00:23:25.868546 containerd[1539]: time="2025-07-07T00:23:25.868493517Z" level=error msg="Failed to destroy network for sandbox \"71d80f97d2c60d2e41f46a312ef2a55b070de97e25e53e395e7cd37e5f55bac7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.869374 containerd[1539]: time="2025-07-07T00:23:25.869318028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q95zr,Uid:41794743-531f-45a9-880c-f07b5eb8bc43,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d80f97d2c60d2e41f46a312ef2a55b070de97e25e53e395e7cd37e5f55bac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.870006 kubelet[2697]: E0707 00:23:25.869886 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d80f97d2c60d2e41f46a312ef2a55b070de97e25e53e395e7cd37e5f55bac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.870006 kubelet[2697]: E0707 00:23:25.869981 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d80f97d2c60d2e41f46a312ef2a55b070de97e25e53e395e7cd37e5f55bac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q95zr" Jul 7 00:23:25.870131 kubelet[2697]: E0707 00:23:25.870010 2697 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d80f97d2c60d2e41f46a312ef2a55b070de97e25e53e395e7cd37e5f55bac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q95zr" Jul 7 00:23:25.870131 kubelet[2697]: E0707 00:23:25.870059 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q95zr_kube-system(41794743-531f-45a9-880c-f07b5eb8bc43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q95zr_kube-system(41794743-531f-45a9-880c-f07b5eb8bc43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71d80f97d2c60d2e41f46a312ef2a55b070de97e25e53e395e7cd37e5f55bac7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q95zr" podUID="41794743-531f-45a9-880c-f07b5eb8bc43" Jul 7 00:23:25.893296 containerd[1539]: time="2025-07-07T00:23:25.893146408Z" level=error msg="Failed to destroy network for sandbox \"b6552f5b76695ab4997e15ab23cdeb61322a21baa6ab9d74b7adbaddeb18d2be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.900702 containerd[1539]: time="2025-07-07T00:23:25.900555217Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f56d9bbdd-rm6r9,Uid:ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6552f5b76695ab4997e15ab23cdeb61322a21baa6ab9d74b7adbaddeb18d2be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.901278 kubelet[2697]: E0707 00:23:25.901068 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6552f5b76695ab4997e15ab23cdeb61322a21baa6ab9d74b7adbaddeb18d2be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.901278 kubelet[2697]: E0707 00:23:25.901144 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6552f5b76695ab4997e15ab23cdeb61322a21baa6ab9d74b7adbaddeb18d2be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f56d9bbdd-rm6r9" Jul 7 00:23:25.901278 kubelet[2697]: E0707 00:23:25.901165 2697 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6552f5b76695ab4997e15ab23cdeb61322a21baa6ab9d74b7adbaddeb18d2be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f56d9bbdd-rm6r9" Jul 7 00:23:25.901530 kubelet[2697]: E0707 00:23:25.901230 2697 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f56d9bbdd-rm6r9_calico-apiserver(ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f56d9bbdd-rm6r9_calico-apiserver(ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6552f5b76695ab4997e15ab23cdeb61322a21baa6ab9d74b7adbaddeb18d2be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f56d9bbdd-rm6r9" podUID="ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73" Jul 7 00:23:25.912667 containerd[1539]: time="2025-07-07T00:23:25.912529362Z" level=error msg="Failed to destroy network for sandbox \"bcae23ac8b78c85ad00622eb68044e2c8b3ee619a1a13b15fa4f3693c1af45fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.914059 containerd[1539]: time="2025-07-07T00:23:25.913655572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-644cd9b6d9-bvqgr,Uid:c0f88e51-b094-448a-8cba-5aadfe68ed5e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcae23ac8b78c85ad00622eb68044e2c8b3ee619a1a13b15fa4f3693c1af45fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.914494 containerd[1539]: time="2025-07-07T00:23:25.914469512Z" level=error msg="Failed to destroy network for sandbox \"688e3b14854e02a612f346345c1de7eb5386180f2745a6580bc51714f1cd43f3\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.915741 kubelet[2697]: E0707 00:23:25.914905 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcae23ac8b78c85ad00622eb68044e2c8b3ee619a1a13b15fa4f3693c1af45fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.915741 kubelet[2697]: E0707 00:23:25.914975 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcae23ac8b78c85ad00622eb68044e2c8b3ee619a1a13b15fa4f3693c1af45fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-644cd9b6d9-bvqgr" Jul 7 00:23:25.915741 kubelet[2697]: E0707 00:23:25.915010 2697 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcae23ac8b78c85ad00622eb68044e2c8b3ee619a1a13b15fa4f3693c1af45fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-644cd9b6d9-bvqgr" Jul 7 00:23:25.916071 kubelet[2697]: E0707 00:23:25.915098 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-644cd9b6d9-bvqgr_calico-system(c0f88e51-b094-448a-8cba-5aadfe68ed5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-644cd9b6d9-bvqgr_calico-system(c0f88e51-b094-448a-8cba-5aadfe68ed5e)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"bcae23ac8b78c85ad00622eb68044e2c8b3ee619a1a13b15fa4f3693c1af45fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-644cd9b6d9-bvqgr" podUID="c0f88e51-b094-448a-8cba-5aadfe68ed5e" Jul 7 00:23:25.917015 containerd[1539]: time="2025-07-07T00:23:25.916920556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f56d9bbdd-88gl4,Uid:1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"688e3b14854e02a612f346345c1de7eb5386180f2745a6580bc51714f1cd43f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.917662 kubelet[2697]: E0707 00:23:25.917596 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"688e3b14854e02a612f346345c1de7eb5386180f2745a6580bc51714f1cd43f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:23:25.917854 kubelet[2697]: E0707 00:23:25.917787 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"688e3b14854e02a612f346345c1de7eb5386180f2745a6580bc51714f1cd43f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f56d9bbdd-88gl4" Jul 7 00:23:25.917854 kubelet[2697]: E0707 00:23:25.917819 2697 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"688e3b14854e02a612f346345c1de7eb5386180f2745a6580bc51714f1cd43f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f56d9bbdd-88gl4" Jul 7 00:23:25.918327 kubelet[2697]: E0707 00:23:25.918136 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f56d9bbdd-88gl4_calico-apiserver(1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f56d9bbdd-88gl4_calico-apiserver(1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"688e3b14854e02a612f346345c1de7eb5386180f2745a6580bc51714f1cd43f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f56d9bbdd-88gl4" podUID="1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4" Jul 7 00:23:31.533292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934705317.mount: Deactivated successfully. 
Jul 7 00:23:31.730757 containerd[1539]: time="2025-07-07T00:23:31.654742974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:23:31.730757 containerd[1539]: time="2025-07-07T00:23:31.730470963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:31.742087 containerd[1539]: time="2025-07-07T00:23:31.742012279Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:31.744377 containerd[1539]: time="2025-07-07T00:23:31.744298109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:31.745719 containerd[1539]: time="2025-07-07T00:23:31.744719780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.229816357s" Jul 7 00:23:31.745719 containerd[1539]: time="2025-07-07T00:23:31.744755572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:23:31.772773 containerd[1539]: time="2025-07-07T00:23:31.772696599Z" level=info msg="CreateContainer within sandbox \"913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:23:31.858206 containerd[1539]: time="2025-07-07T00:23:31.857903344Z" level=info msg="Container 
ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:31.861095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222252142.mount: Deactivated successfully. Jul 7 00:23:31.901062 containerd[1539]: time="2025-07-07T00:23:31.900989185Z" level=info msg="CreateContainer within sandbox \"913b2317e981c283d2b4f04fe4b6b51b31a03946b32022fe67145304f6fbfb3c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a\"" Jul 7 00:23:31.903063 containerd[1539]: time="2025-07-07T00:23:31.902880256Z" level=info msg="StartContainer for \"ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a\"" Jul 7 00:23:31.913009 containerd[1539]: time="2025-07-07T00:23:31.912853749Z" level=info msg="connecting to shim ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a" address="unix:///run/containerd/s/9b01503bc6edf5daa5492cc1e3f29c8410accd014c2a37c21a949f083360572a" protocol=ttrpc version=3 Jul 7 00:23:32.187043 systemd[1]: Started cri-containerd-ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a.scope - libcontainer container ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a. Jul 7 00:23:32.308605 containerd[1539]: time="2025-07-07T00:23:32.308559319Z" level=info msg="StartContainer for \"ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a\" returns successfully" Jul 7 00:23:32.436757 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:23:32.436906 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 00:23:32.634789 kubelet[2697]: I0707 00:23:32.631431 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g8rdb" podStartSLOduration=2.105225438 podStartE2EDuration="17.631406012s" podCreationTimestamp="2025-07-07 00:23:15 +0000 UTC" firstStartedPulling="2025-07-07 00:23:16.220256079 +0000 UTC m=+22.010170510" lastFinishedPulling="2025-07-07 00:23:31.746436655 +0000 UTC m=+37.536351084" observedRunningTime="2025-07-07 00:23:32.62793997 +0000 UTC m=+38.417854430" watchObservedRunningTime="2025-07-07 00:23:32.631406012 +0000 UTC m=+38.421320462" Jul 7 00:23:32.773566 kubelet[2697]: I0707 00:23:32.773512 2697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0f88e51-b094-448a-8cba-5aadfe68ed5e-whisker-ca-bundle\") pod \"c0f88e51-b094-448a-8cba-5aadfe68ed5e\" (UID: \"c0f88e51-b094-448a-8cba-5aadfe68ed5e\") " Jul 7 00:23:32.773566 kubelet[2697]: I0707 00:23:32.773564 2697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c0f88e51-b094-448a-8cba-5aadfe68ed5e-whisker-backend-key-pair\") pod \"c0f88e51-b094-448a-8cba-5aadfe68ed5e\" (UID: \"c0f88e51-b094-448a-8cba-5aadfe68ed5e\") " Jul 7 00:23:32.774255 kubelet[2697]: I0707 00:23:32.773594 2697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wwlg\" (UniqueName: \"kubernetes.io/projected/c0f88e51-b094-448a-8cba-5aadfe68ed5e-kube-api-access-7wwlg\") pod \"c0f88e51-b094-448a-8cba-5aadfe68ed5e\" (UID: \"c0f88e51-b094-448a-8cba-5aadfe68ed5e\") " Jul 7 00:23:32.777076 kubelet[2697]: I0707 00:23:32.776618 2697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f88e51-b094-448a-8cba-5aadfe68ed5e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c0f88e51-b094-448a-8cba-5aadfe68ed5e" 
(UID: "c0f88e51-b094-448a-8cba-5aadfe68ed5e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:23:32.783765 kubelet[2697]: I0707 00:23:32.783662 2697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f88e51-b094-448a-8cba-5aadfe68ed5e-kube-api-access-7wwlg" (OuterVolumeSpecName: "kube-api-access-7wwlg") pod "c0f88e51-b094-448a-8cba-5aadfe68ed5e" (UID: "c0f88e51-b094-448a-8cba-5aadfe68ed5e"). InnerVolumeSpecName "kube-api-access-7wwlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:23:32.787470 kubelet[2697]: I0707 00:23:32.787286 2697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0f88e51-b094-448a-8cba-5aadfe68ed5e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c0f88e51-b094-448a-8cba-5aadfe68ed5e" (UID: "c0f88e51-b094-448a-8cba-5aadfe68ed5e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 00:23:32.789534 systemd[1]: var-lib-kubelet-pods-c0f88e51\x2db094\x2d448a\x2d8cba\x2d5aadfe68ed5e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7wwlg.mount: Deactivated successfully. Jul 7 00:23:32.799989 systemd[1]: var-lib-kubelet-pods-c0f88e51\x2db094\x2d448a\x2d8cba\x2d5aadfe68ed5e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 00:23:32.875268 kubelet[2697]: I0707 00:23:32.875156 2697 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c0f88e51-b094-448a-8cba-5aadfe68ed5e-whisker-backend-key-pair\") on node \"ci-4344.1.1-7-4873e20794\" DevicePath \"\"" Jul 7 00:23:32.875268 kubelet[2697]: I0707 00:23:32.875218 2697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wwlg\" (UniqueName: \"kubernetes.io/projected/c0f88e51-b094-448a-8cba-5aadfe68ed5e-kube-api-access-7wwlg\") on node \"ci-4344.1.1-7-4873e20794\" DevicePath \"\"" Jul 7 00:23:32.875980 kubelet[2697]: I0707 00:23:32.875235 2697 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0f88e51-b094-448a-8cba-5aadfe68ed5e-whisker-ca-bundle\") on node \"ci-4344.1.1-7-4873e20794\" DevicePath \"\"" Jul 7 00:23:32.991142 containerd[1539]: time="2025-07-07T00:23:32.990552418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a\" id:\"22efb7dd6faf6af4d71679aafec94aeb65e4bd0cc26200599056fc6e138094f6\" pid:3783 exit_status:1 exited_at:{seconds:1751847812 nanos:968946728}" Jul 7 00:23:33.592051 systemd[1]: Removed slice kubepods-besteffort-podc0f88e51_b094_448a_8cba_5aadfe68ed5e.slice - libcontainer container kubepods-besteffort-podc0f88e51_b094_448a_8cba_5aadfe68ed5e.slice. Jul 7 00:23:33.739643 systemd[1]: Created slice kubepods-besteffort-pod40b28672_0b41_413f_8700_a3653e2a699f.slice - libcontainer container kubepods-besteffort-pod40b28672_0b41_413f_8700_a3653e2a699f.slice. 
Jul 7 00:23:33.764098 containerd[1539]: time="2025-07-07T00:23:33.764043261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a\" id:\"9083f4d2b66ddd60510eea086531403ce344bf415138b80fafe98d7d9fe32bae\" pid:3821 exit_status:1 exited_at:{seconds:1751847813 nanos:755482168}" Jul 7 00:23:33.783916 kubelet[2697]: I0707 00:23:33.783813 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40b28672-0b41-413f-8700-a3653e2a699f-whisker-ca-bundle\") pod \"whisker-b84578f67-q67vb\" (UID: \"40b28672-0b41-413f-8700-a3653e2a699f\") " pod="calico-system/whisker-b84578f67-q67vb" Jul 7 00:23:33.783916 kubelet[2697]: I0707 00:23:33.783882 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40b28672-0b41-413f-8700-a3653e2a699f-whisker-backend-key-pair\") pod \"whisker-b84578f67-q67vb\" (UID: \"40b28672-0b41-413f-8700-a3653e2a699f\") " pod="calico-system/whisker-b84578f67-q67vb" Jul 7 00:23:33.783916 kubelet[2697]: I0707 00:23:33.783917 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjt2g\" (UniqueName: \"kubernetes.io/projected/40b28672-0b41-413f-8700-a3653e2a699f-kube-api-access-wjt2g\") pod \"whisker-b84578f67-q67vb\" (UID: \"40b28672-0b41-413f-8700-a3653e2a699f\") " pod="calico-system/whisker-b84578f67-q67vb" Jul 7 00:23:34.044559 containerd[1539]: time="2025-07-07T00:23:34.044062910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b84578f67-q67vb,Uid:40b28672-0b41-413f-8700-a3653e2a699f,Namespace:calico-system,Attempt:0,}" Jul 7 00:23:34.349331 kubelet[2697]: I0707 00:23:34.348166 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0f88e51-b094-448a-8cba-5aadfe68ed5e" 
path="/var/lib/kubelet/pods/c0f88e51-b094-448a-8cba-5aadfe68ed5e/volumes" Jul 7 00:23:34.432423 systemd-networkd[1454]: cali20cf8d96448: Link UP Jul 7 00:23:34.435238 systemd-networkd[1454]: cali20cf8d96448: Gained carrier Jul 7 00:23:34.483746 containerd[1539]: 2025-07-07 00:23:34.095 [INFO][3834] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:23:34.483746 containerd[1539]: 2025-07-07 00:23:34.129 [INFO][3834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0 whisker-b84578f67- calico-system 40b28672-0b41-413f-8700-a3653e2a699f 900 0 2025-07-07 00:23:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b84578f67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.1-7-4873e20794 whisker-b84578f67-q67vb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali20cf8d96448 [] [] }} ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Namespace="calico-system" Pod="whisker-b84578f67-q67vb" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-" Jul 7 00:23:34.483746 containerd[1539]: 2025-07-07 00:23:34.129 [INFO][3834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Namespace="calico-system" Pod="whisker-b84578f67-q67vb" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" Jul 7 00:23:34.483746 containerd[1539]: 2025-07-07 00:23:34.328 [INFO][3846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" HandleID="k8s-pod-network.cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" 
Workload="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" Jul 7 00:23:34.484888 containerd[1539]: 2025-07-07 00:23:34.330 [INFO][3846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" HandleID="k8s-pod-network.cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Workload="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006081c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-7-4873e20794", "pod":"whisker-b84578f67-q67vb", "timestamp":"2025-07-07 00:23:34.328362074 +0000 UTC"}, Hostname:"ci-4344.1.1-7-4873e20794", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:23:34.484888 containerd[1539]: 2025-07-07 00:23:34.330 [INFO][3846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:23:34.484888 containerd[1539]: 2025-07-07 00:23:34.331 [INFO][3846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:23:34.484888 containerd[1539]: 2025-07-07 00:23:34.331 [INFO][3846] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-7-4873e20794' Jul 7 00:23:34.484888 containerd[1539]: 2025-07-07 00:23:34.349 [INFO][3846] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:34.484888 containerd[1539]: 2025-07-07 00:23:34.367 [INFO][3846] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:34.484888 containerd[1539]: 2025-07-07 00:23:34.379 [INFO][3846] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:34.484888 containerd[1539]: 2025-07-07 00:23:34.382 [INFO][3846] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:34.484888 containerd[1539]: 2025-07-07 00:23:34.386 [INFO][3846] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:34.485980 containerd[1539]: 2025-07-07 00:23:34.387 [INFO][3846] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:34.485980 containerd[1539]: 2025-07-07 00:23:34.390 [INFO][3846] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f Jul 7 00:23:34.485980 containerd[1539]: 2025-07-07 00:23:34.396 [INFO][3846] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:34.485980 containerd[1539]: 2025-07-07 00:23:34.406 [INFO][3846] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.93.129/26] block=192.168.93.128/26 handle="k8s-pod-network.cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:34.485980 containerd[1539]: 2025-07-07 00:23:34.406 [INFO][3846] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.129/26] handle="k8s-pod-network.cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:34.485980 containerd[1539]: 2025-07-07 00:23:34.406 [INFO][3846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:23:34.485980 containerd[1539]: 2025-07-07 00:23:34.406 [INFO][3846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.129/26] IPv6=[] ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" HandleID="k8s-pod-network.cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Workload="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" Jul 7 00:23:34.486912 containerd[1539]: 2025-07-07 00:23:34.410 [INFO][3834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Namespace="calico-system" Pod="whisker-b84578f67-q67vb" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0", GenerateName:"whisker-b84578f67-", Namespace:"calico-system", SelfLink:"", UID:"40b28672-0b41-413f-8700-a3653e2a699f", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b84578f67", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"", Pod:"whisker-b84578f67-q67vb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali20cf8d96448", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:34.486912 containerd[1539]: 2025-07-07 00:23:34.411 [INFO][3834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.129/32] ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Namespace="calico-system" Pod="whisker-b84578f67-q67vb" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" Jul 7 00:23:34.487066 containerd[1539]: 2025-07-07 00:23:34.411 [INFO][3834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20cf8d96448 ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Namespace="calico-system" Pod="whisker-b84578f67-q67vb" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" Jul 7 00:23:34.487066 containerd[1539]: 2025-07-07 00:23:34.438 [INFO][3834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Namespace="calico-system" Pod="whisker-b84578f67-q67vb" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" Jul 7 00:23:34.487122 containerd[1539]: 2025-07-07 00:23:34.440 [INFO][3834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Namespace="calico-system" Pod="whisker-b84578f67-q67vb" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0", GenerateName:"whisker-b84578f67-", Namespace:"calico-system", SelfLink:"", UID:"40b28672-0b41-413f-8700-a3653e2a699f", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b84578f67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f", Pod:"whisker-b84578f67-q67vb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali20cf8d96448", MAC:"a2:d6:df:1e:87:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:34.487181 containerd[1539]: 2025-07-07 00:23:34.477 [INFO][3834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" Namespace="calico-system" Pod="whisker-b84578f67-q67vb" 
WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-whisker--b84578f67--q67vb-eth0" Jul 7 00:23:34.680779 containerd[1539]: time="2025-07-07T00:23:34.680612037Z" level=info msg="connecting to shim cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f" address="unix:///run/containerd/s/f6cf1b198af29f152f48303ddce78f28d515fe5a5eeb3f687bfddb122337da81" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:34.771016 systemd[1]: Started cri-containerd-cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f.scope - libcontainer container cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f. Jul 7 00:23:34.880846 containerd[1539]: time="2025-07-07T00:23:34.880218707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b84578f67-q67vb,Uid:40b28672-0b41-413f-8700-a3653e2a699f,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f\"" Jul 7 00:23:34.888928 containerd[1539]: time="2025-07-07T00:23:34.888876582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:23:35.059804 containerd[1539]: time="2025-07-07T00:23:35.059617004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a\" id:\"660b2747014be5f4f1d90582bb7e14365aee31aa79559d88311fa807f489346a\" pid:3965 exit_status:1 exited_at:{seconds:1751847815 nanos:57037953}" Jul 7 00:23:35.414262 systemd-networkd[1454]: vxlan.calico: Link UP Jul 7 00:23:35.414269 systemd-networkd[1454]: vxlan.calico: Gained carrier Jul 7 00:23:35.555217 systemd-networkd[1454]: cali20cf8d96448: Gained IPv6LL Jul 7 00:23:36.234810 containerd[1539]: time="2025-07-07T00:23:36.234756440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:36.236563 containerd[1539]: time="2025-07-07T00:23:36.235834079Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:23:36.237837 containerd[1539]: time="2025-07-07T00:23:36.237788443Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:36.240789 containerd[1539]: time="2025-07-07T00:23:36.240569369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.351637788s" Jul 7 00:23:36.240789 containerd[1539]: time="2025-07-07T00:23:36.240610840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:23:36.242839 containerd[1539]: time="2025-07-07T00:23:36.242105979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:36.247351 containerd[1539]: time="2025-07-07T00:23:36.247298666Z" level=info msg="CreateContainer within sandbox \"cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:23:36.257787 containerd[1539]: time="2025-07-07T00:23:36.253688565Z" level=info msg="Container 15c408f910f7d2fa6adec72d0b6e1435797d832d05d9c883d74559bd2265d78c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:36.269761 containerd[1539]: time="2025-07-07T00:23:36.269647924Z" level=info msg="CreateContainer within sandbox \"cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"15c408f910f7d2fa6adec72d0b6e1435797d832d05d9c883d74559bd2265d78c\"" Jul 7 00:23:36.270753 containerd[1539]: time="2025-07-07T00:23:36.270574764Z" level=info msg="StartContainer for \"15c408f910f7d2fa6adec72d0b6e1435797d832d05d9c883d74559bd2265d78c\"" Jul 7 00:23:36.272912 containerd[1539]: time="2025-07-07T00:23:36.272872605Z" level=info msg="connecting to shim 15c408f910f7d2fa6adec72d0b6e1435797d832d05d9c883d74559bd2265d78c" address="unix:///run/containerd/s/f6cf1b198af29f152f48303ddce78f28d515fe5a5eeb3f687bfddb122337da81" protocol=ttrpc version=3 Jul 7 00:23:36.307988 systemd[1]: Started cri-containerd-15c408f910f7d2fa6adec72d0b6e1435797d832d05d9c883d74559bd2265d78c.scope - libcontainer container 15c408f910f7d2fa6adec72d0b6e1435797d832d05d9c883d74559bd2265d78c. Jul 7 00:23:36.343632 containerd[1539]: time="2025-07-07T00:23:36.343218064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-gns4s,Uid:4ab792d8-2ac7-47ae-8618-31a2f17ae776,Namespace:calico-system,Attempt:0,}" Jul 7 00:23:36.343901 kubelet[2697]: E0707 00:23:36.343204 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:36.347019 containerd[1539]: time="2025-07-07T00:23:36.346834569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2b857,Uid:880d1335-8d37-4be4-92d9-2b04accdacc0,Namespace:kube-system,Attempt:0,}" Jul 7 00:23:36.434630 containerd[1539]: time="2025-07-07T00:23:36.434544808Z" level=info msg="StartContainer for \"15c408f910f7d2fa6adec72d0b6e1435797d832d05d9c883d74559bd2265d78c\" returns successfully" Jul 7 00:23:36.440362 containerd[1539]: time="2025-07-07T00:23:36.440290251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:23:36.629532 systemd-networkd[1454]: cali14ad9cefc47: Link UP 
Jul 7 00:23:36.631663 systemd-networkd[1454]: cali14ad9cefc47: Gained carrier Jul 7 00:23:36.654896 containerd[1539]: 2025-07-07 00:23:36.484 [INFO][4153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0 goldmane-58fd7646b9- calico-system 4ab792d8-2ac7-47ae-8618-31a2f17ae776 828 0 2025-07-07 00:23:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.1-7-4873e20794 goldmane-58fd7646b9-gns4s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali14ad9cefc47 [] [] }} ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Namespace="calico-system" Pod="goldmane-58fd7646b9-gns4s" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-" Jul 7 00:23:36.654896 containerd[1539]: 2025-07-07 00:23:36.485 [INFO][4153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Namespace="calico-system" Pod="goldmane-58fd7646b9-gns4s" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" Jul 7 00:23:36.654896 containerd[1539]: 2025-07-07 00:23:36.542 [INFO][4188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" HandleID="k8s-pod-network.32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Workload="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" Jul 7 00:23:36.655516 containerd[1539]: 2025-07-07 00:23:36.542 [INFO][4188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" 
HandleID="k8s-pod-network.32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Workload="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf9a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-7-4873e20794", "pod":"goldmane-58fd7646b9-gns4s", "timestamp":"2025-07-07 00:23:36.54226838 +0000 UTC"}, Hostname:"ci-4344.1.1-7-4873e20794", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:23:36.655516 containerd[1539]: 2025-07-07 00:23:36.542 [INFO][4188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:23:36.655516 containerd[1539]: 2025-07-07 00:23:36.542 [INFO][4188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:23:36.655516 containerd[1539]: 2025-07-07 00:23:36.542 [INFO][4188] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-7-4873e20794' Jul 7 00:23:36.655516 containerd[1539]: 2025-07-07 00:23:36.551 [INFO][4188] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.655516 containerd[1539]: 2025-07-07 00:23:36.565 [INFO][4188] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.655516 containerd[1539]: 2025-07-07 00:23:36.575 [INFO][4188] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.655516 containerd[1539]: 2025-07-07 00:23:36.579 [INFO][4188] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.655516 containerd[1539]: 2025-07-07 00:23:36.583 [INFO][4188] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.656421 containerd[1539]: 2025-07-07 00:23:36.584 [INFO][4188] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.656421 containerd[1539]: 2025-07-07 00:23:36.587 [INFO][4188] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767 Jul 7 00:23:36.656421 containerd[1539]: 2025-07-07 00:23:36.597 [INFO][4188] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.656421 containerd[1539]: 2025-07-07 00:23:36.610 [INFO][4188] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.130/26] block=192.168.93.128/26 handle="k8s-pod-network.32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.656421 containerd[1539]: 2025-07-07 00:23:36.610 [INFO][4188] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.130/26] handle="k8s-pod-network.32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.656421 containerd[1539]: 2025-07-07 00:23:36.610 [INFO][4188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:23:36.656421 containerd[1539]: 2025-07-07 00:23:36.610 [INFO][4188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.130/26] IPv6=[] ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" HandleID="k8s-pod-network.32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Workload="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" Jul 7 00:23:36.656630 containerd[1539]: 2025-07-07 00:23:36.619 [INFO][4153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Namespace="calico-system" Pod="goldmane-58fd7646b9-gns4s" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4ab792d8-2ac7-47ae-8618-31a2f17ae776", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"", Pod:"goldmane-58fd7646b9-gns4s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali14ad9cefc47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:36.656630 containerd[1539]: 2025-07-07 00:23:36.619 [INFO][4153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.130/32] ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Namespace="calico-system" Pod="goldmane-58fd7646b9-gns4s" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" Jul 7 00:23:36.656767 containerd[1539]: 2025-07-07 00:23:36.619 [INFO][4153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14ad9cefc47 ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Namespace="calico-system" Pod="goldmane-58fd7646b9-gns4s" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" Jul 7 00:23:36.656767 containerd[1539]: 2025-07-07 00:23:36.631 [INFO][4153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Namespace="calico-system" Pod="goldmane-58fd7646b9-gns4s" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" Jul 7 00:23:36.658382 containerd[1539]: 2025-07-07 00:23:36.634 [INFO][4153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Namespace="calico-system" Pod="goldmane-58fd7646b9-gns4s" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", 
UID:"4ab792d8-2ac7-47ae-8618-31a2f17ae776", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767", Pod:"goldmane-58fd7646b9-gns4s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14ad9cefc47", MAC:"26:78:f4:8d:95:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:36.658703 containerd[1539]: 2025-07-07 00:23:36.647 [INFO][4153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" Namespace="calico-system" Pod="goldmane-58fd7646b9-gns4s" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-goldmane--58fd7646b9--gns4s-eth0" Jul 7 00:23:36.694175 containerd[1539]: time="2025-07-07T00:23:36.693796699Z" level=info msg="connecting to shim 32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767" address="unix:///run/containerd/s/06ffb06becd2f92e66b12bbad205b4f3ff37c4a4969bd5d4589ab01ab3f9388f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:36.729987 systemd[1]: Started 
cri-containerd-32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767.scope - libcontainer container 32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767. Jul 7 00:23:36.738734 systemd-networkd[1454]: calia86a2d40b29: Link UP Jul 7 00:23:36.739802 systemd-networkd[1454]: calia86a2d40b29: Gained carrier Jul 7 00:23:36.769832 containerd[1539]: 2025-07-07 00:23:36.493 [INFO][4151] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0 coredns-7c65d6cfc9- kube-system 880d1335-8d37-4be4-92d9-2b04accdacc0 833 0 2025-07-07 00:22:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-7-4873e20794 coredns-7c65d6cfc9-2b857 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia86a2d40b29 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2b857" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-" Jul 7 00:23:36.769832 containerd[1539]: 2025-07-07 00:23:36.494 [INFO][4151] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2b857" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" Jul 7 00:23:36.769832 containerd[1539]: 2025-07-07 00:23:36.568 [INFO][4193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" HandleID="k8s-pod-network.475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" 
Workload="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" Jul 7 00:23:36.770361 containerd[1539]: 2025-07-07 00:23:36.568 [INFO][4193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" HandleID="k8s-pod-network.475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Workload="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5540), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-7-4873e20794", "pod":"coredns-7c65d6cfc9-2b857", "timestamp":"2025-07-07 00:23:36.568121661 +0000 UTC"}, Hostname:"ci-4344.1.1-7-4873e20794", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:23:36.770361 containerd[1539]: 2025-07-07 00:23:36.568 [INFO][4193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:23:36.770361 containerd[1539]: 2025-07-07 00:23:36.611 [INFO][4193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:23:36.770361 containerd[1539]: 2025-07-07 00:23:36.611 [INFO][4193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-7-4873e20794' Jul 7 00:23:36.770361 containerd[1539]: 2025-07-07 00:23:36.653 [INFO][4193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.770361 containerd[1539]: 2025-07-07 00:23:36.665 [INFO][4193] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.770361 containerd[1539]: 2025-07-07 00:23:36.682 [INFO][4193] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.770361 containerd[1539]: 2025-07-07 00:23:36.685 [INFO][4193] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.770361 containerd[1539]: 2025-07-07 00:23:36.690 [INFO][4193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.771162 containerd[1539]: 2025-07-07 00:23:36.691 [INFO][4193] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.771162 containerd[1539]: 2025-07-07 00:23:36.696 [INFO][4193] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c Jul 7 00:23:36.771162 containerd[1539]: 2025-07-07 00:23:36.705 [INFO][4193] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.771162 containerd[1539]: 2025-07-07 00:23:36.720 [INFO][4193] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.93.131/26] block=192.168.93.128/26 handle="k8s-pod-network.475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.771162 containerd[1539]: 2025-07-07 00:23:36.722 [INFO][4193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.131/26] handle="k8s-pod-network.475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:36.771162 containerd[1539]: 2025-07-07 00:23:36.722 [INFO][4193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:23:36.771162 containerd[1539]: 2025-07-07 00:23:36.723 [INFO][4193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.131/26] IPv6=[] ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" HandleID="k8s-pod-network.475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Workload="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" Jul 7 00:23:36.771409 containerd[1539]: 2025-07-07 00:23:36.734 [INFO][4151] cni-plugin/k8s.go 418: Populated endpoint ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2b857" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"880d1335-8d37-4be4-92d9-2b04accdacc0", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"", Pod:"coredns-7c65d6cfc9-2b857", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia86a2d40b29", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:36.771409 containerd[1539]: 2025-07-07 00:23:36.735 [INFO][4151] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.131/32] ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2b857" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" Jul 7 00:23:36.771409 containerd[1539]: 2025-07-07 00:23:36.735 [INFO][4151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia86a2d40b29 ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2b857" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" Jul 7 00:23:36.771409 containerd[1539]: 2025-07-07 00:23:36.741 [INFO][4151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2b857" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" Jul 7 00:23:36.771409 containerd[1539]: 2025-07-07 00:23:36.746 [INFO][4151] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2b857" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"880d1335-8d37-4be4-92d9-2b04accdacc0", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c", Pod:"coredns-7c65d6cfc9-2b857", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia86a2d40b29", MAC:"76:61:55:8c:df:83", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:36.771409 containerd[1539]: 2025-07-07 00:23:36.763 [INFO][4151] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2b857" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--2b857-eth0" Jul 7 00:23:36.802509 containerd[1539]: time="2025-07-07T00:23:36.802429063Z" level=info msg="connecting to shim 475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c" address="unix:///run/containerd/s/dd7b7e9d9312e8a8b083cdc9449d79fc37252a347de0964bd236d22bc60f57a5" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:36.867914 systemd[1]: Started cri-containerd-475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c.scope - libcontainer container 475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c. 
Jul 7 00:23:36.879606 containerd[1539]: time="2025-07-07T00:23:36.879561840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-gns4s,Uid:4ab792d8-2ac7-47ae-8618-31a2f17ae776,Namespace:calico-system,Attempt:0,} returns sandbox id \"32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767\"" Jul 7 00:23:36.944125 containerd[1539]: time="2025-07-07T00:23:36.943998441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2b857,Uid:880d1335-8d37-4be4-92d9-2b04accdacc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c\"" Jul 7 00:23:36.945721 kubelet[2697]: E0707 00:23:36.945355 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:36.951136 containerd[1539]: time="2025-07-07T00:23:36.951074907Z" level=info msg="CreateContainer within sandbox \"475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:23:36.984753 containerd[1539]: time="2025-07-07T00:23:36.984259715Z" level=info msg="Container b8fa71bcd4688643fc174e6513597f56831accda2a4235d8c7a3391fc50dfaaa: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:36.990238 containerd[1539]: time="2025-07-07T00:23:36.990149849Z" level=info msg="CreateContainer within sandbox \"475405e3199eede302ef044cff90f28ce7ab54366fd388b5fef5b5f2065dc88c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8fa71bcd4688643fc174e6513597f56831accda2a4235d8c7a3391fc50dfaaa\"" Jul 7 00:23:36.991644 containerd[1539]: time="2025-07-07T00:23:36.991604140Z" level=info msg="StartContainer for \"b8fa71bcd4688643fc174e6513597f56831accda2a4235d8c7a3391fc50dfaaa\"" Jul 7 00:23:36.993161 containerd[1539]: time="2025-07-07T00:23:36.993065391Z" level=info msg="connecting to shim 
b8fa71bcd4688643fc174e6513597f56831accda2a4235d8c7a3391fc50dfaaa" address="unix:///run/containerd/s/dd7b7e9d9312e8a8b083cdc9449d79fc37252a347de0964bd236d22bc60f57a5" protocol=ttrpc version=3 Jul 7 00:23:37.019955 systemd[1]: Started cri-containerd-b8fa71bcd4688643fc174e6513597f56831accda2a4235d8c7a3391fc50dfaaa.scope - libcontainer container b8fa71bcd4688643fc174e6513597f56831accda2a4235d8c7a3391fc50dfaaa. Jul 7 00:23:37.070030 containerd[1539]: time="2025-07-07T00:23:37.069978927Z" level=info msg="StartContainer for \"b8fa71bcd4688643fc174e6513597f56831accda2a4235d8c7a3391fc50dfaaa\" returns successfully" Jul 7 00:23:37.219503 systemd-networkd[1454]: vxlan.calico: Gained IPv6LL Jul 7 00:23:37.342810 containerd[1539]: time="2025-07-07T00:23:37.342739871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f56d9bbdd-88gl4,Uid:1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:23:37.503237 systemd-networkd[1454]: calide6bc0f1817: Link UP Jul 7 00:23:37.504958 systemd-networkd[1454]: calide6bc0f1817: Gained carrier Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.395 [INFO][4347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0 calico-apiserver-6f56d9bbdd- calico-apiserver 1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4 830 0 2025-07-07 00:23:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f56d9bbdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-7-4873e20794 calico-apiserver-6f56d9bbdd-88gl4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calide6bc0f1817 [] [] }} ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" 
Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-88gl4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.395 [INFO][4347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-88gl4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.440 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" HandleID="k8s-pod-network.0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Workload="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.440 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" HandleID="k8s-pod-network.0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Workload="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-7-4873e20794", "pod":"calico-apiserver-6f56d9bbdd-88gl4", "timestamp":"2025-07-07 00:23:37.440297613 +0000 UTC"}, Hostname:"ci-4344.1.1-7-4873e20794", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.440 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.440 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.440 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-7-4873e20794' Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.449 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.456 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.463 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.465 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.470 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.470 [INFO][4359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.473 [INFO][4359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87 Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.484 [INFO][4359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" 
host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.495 [INFO][4359] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.132/26] block=192.168.93.128/26 handle="k8s-pod-network.0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.495 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.132/26] handle="k8s-pod-network.0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.495 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:23:37.530876 containerd[1539]: 2025-07-07 00:23:37.495 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.132/26] IPv6=[] ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" HandleID="k8s-pod-network.0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Workload="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" Jul 7 00:23:37.532553 containerd[1539]: 2025-07-07 00:23:37.499 [INFO][4347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-88gl4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0", GenerateName:"calico-apiserver-6f56d9bbdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 11, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f56d9bbdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"", Pod:"calico-apiserver-6f56d9bbdd-88gl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide6bc0f1817", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:37.532553 containerd[1539]: 2025-07-07 00:23:37.499 [INFO][4347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.132/32] ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-88gl4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" Jul 7 00:23:37.532553 containerd[1539]: 2025-07-07 00:23:37.499 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide6bc0f1817 ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-88gl4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" Jul 7 00:23:37.532553 containerd[1539]: 2025-07-07 00:23:37.503 [INFO][4347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-88gl4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" Jul 7 00:23:37.532553 containerd[1539]: 2025-07-07 00:23:37.505 [INFO][4347] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-88gl4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0", GenerateName:"calico-apiserver-6f56d9bbdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f56d9bbdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87", Pod:"calico-apiserver-6f56d9bbdd-88gl4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide6bc0f1817", MAC:"22:5a:c2:27:03:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:37.532553 containerd[1539]: 2025-07-07 00:23:37.526 [INFO][4347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-88gl4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--88gl4-eth0" Jul 7 00:23:37.569367 containerd[1539]: time="2025-07-07T00:23:37.569318323Z" level=info msg="connecting to shim 0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87" address="unix:///run/containerd/s/c55058851cdd80ae499e63859f16b724b2e47a76a95f5048a79c0588ff75da88" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:37.612969 systemd[1]: Started cri-containerd-0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87.scope - libcontainer container 0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87. 
Jul 7 00:23:37.620725 kubelet[2697]: E0707 00:23:37.618969 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:37.657990 kubelet[2697]: I0707 00:23:37.655969 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2b857" podStartSLOduration=39.655927641 podStartE2EDuration="39.655927641s" podCreationTimestamp="2025-07-07 00:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:23:37.645141707 +0000 UTC m=+43.435056159" watchObservedRunningTime="2025-07-07 00:23:37.655927641 +0000 UTC m=+43.445842090" Jul 7 00:23:37.733785 containerd[1539]: time="2025-07-07T00:23:37.733729905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f56d9bbdd-88gl4,Uid:1f38a8f0-3f6d-4d3b-ac4a-4494601c3dd4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87\"" Jul 7 00:23:37.922976 systemd-networkd[1454]: calia86a2d40b29: Gained IPv6LL Jul 7 00:23:38.243367 systemd-networkd[1454]: cali14ad9cefc47: Gained IPv6LL Jul 7 00:23:38.627773 kubelet[2697]: E0707 00:23:38.627722 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:38.883403 systemd-networkd[1454]: calide6bc0f1817: Gained IPv6LL Jul 7 00:23:39.112029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268522661.mount: Deactivated successfully. 
Jul 7 00:23:39.124925 containerd[1539]: time="2025-07-07T00:23:39.124827277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:39.126530 containerd[1539]: time="2025-07-07T00:23:39.126067395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:23:39.126530 containerd[1539]: time="2025-07-07T00:23:39.126111160Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:39.128791 containerd[1539]: time="2025-07-07T00:23:39.128744750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:39.130714 containerd[1539]: time="2025-07-07T00:23:39.130637943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.690276032s" Jul 7 00:23:39.130714 containerd[1539]: time="2025-07-07T00:23:39.130709298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:23:39.132801 containerd[1539]: time="2025-07-07T00:23:39.132745422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:23:39.135254 containerd[1539]: time="2025-07-07T00:23:39.135076438Z" level=info msg="CreateContainer within sandbox 
\"cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:23:39.145651 containerd[1539]: time="2025-07-07T00:23:39.145211640Z" level=info msg="Container fb9e371fa036957cacea777790c568f065b3b1e33ec561eccea43abbec2b8391: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:39.153505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308733787.mount: Deactivated successfully. Jul 7 00:23:39.161778 containerd[1539]: time="2025-07-07T00:23:39.161659962Z" level=info msg="CreateContainer within sandbox \"cf8f01bc73c68f1b9be824db0b9eb0ef5153b3e35eb7c0448cdffb598742a74f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fb9e371fa036957cacea777790c568f065b3b1e33ec561eccea43abbec2b8391\"" Jul 7 00:23:39.162857 containerd[1539]: time="2025-07-07T00:23:39.162791979Z" level=info msg="StartContainer for \"fb9e371fa036957cacea777790c568f065b3b1e33ec561eccea43abbec2b8391\"" Jul 7 00:23:39.165004 containerd[1539]: time="2025-07-07T00:23:39.164924350Z" level=info msg="connecting to shim fb9e371fa036957cacea777790c568f065b3b1e33ec561eccea43abbec2b8391" address="unix:///run/containerd/s/f6cf1b198af29f152f48303ddce78f28d515fe5a5eeb3f687bfddb122337da81" protocol=ttrpc version=3 Jul 7 00:23:39.210941 systemd[1]: Started cri-containerd-fb9e371fa036957cacea777790c568f065b3b1e33ec561eccea43abbec2b8391.scope - libcontainer container fb9e371fa036957cacea777790c568f065b3b1e33ec561eccea43abbec2b8391. 
Jul 7 00:23:39.278143 containerd[1539]: time="2025-07-07T00:23:39.278028619Z" level=info msg="StartContainer for \"fb9e371fa036957cacea777790c568f065b3b1e33ec561eccea43abbec2b8391\" returns successfully"
Jul 7 00:23:39.343367 containerd[1539]: time="2025-07-07T00:23:39.343288970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f56d9bbdd-rm6r9,Uid:ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 00:23:39.343901 containerd[1539]: time="2025-07-07T00:23:39.343313599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8ldv,Uid:dba08fd4-9512-4c11-aac8-02de403331df,Namespace:calico-system,Attempt:0,}"
Jul 7 00:23:39.552958 systemd-networkd[1454]: cali7776a91dc64: Link UP
Jul 7 00:23:39.554557 systemd-networkd[1454]: cali7776a91dc64: Gained carrier
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.409 [INFO][4467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0 calico-apiserver-6f56d9bbdd- calico-apiserver ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73 832 0 2025-07-07 00:23:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f56d9bbdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-7-4873e20794 calico-apiserver-6f56d9bbdd-rm6r9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7776a91dc64 [] [] }} ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-rm6r9" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.411 [INFO][4467] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-rm6r9" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.471 [INFO][4489] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" HandleID="k8s-pod-network.940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Workload="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.475 [INFO][4489] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" HandleID="k8s-pod-network.940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Workload="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-7-4873e20794", "pod":"calico-apiserver-6f56d9bbdd-rm6r9", "timestamp":"2025-07-07 00:23:39.471414991 +0000 UTC"}, Hostname:"ci-4344.1.1-7-4873e20794", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.475 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.476 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.476 [INFO][4489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-7-4873e20794'
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.494 [INFO][4489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.505 [INFO][4489] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.512 [INFO][4489] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.515 [INFO][4489] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.518 [INFO][4489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.519 [INFO][4489] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.521 [INFO][4489] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.527 [INFO][4489] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.537 [INFO][4489] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.133/26] block=192.168.93.128/26 handle="k8s-pod-network.940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.539 [INFO][4489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.133/26] handle="k8s-pod-network.940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.539 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:23:39.582778 containerd[1539]: 2025-07-07 00:23:39.539 [INFO][4489] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.133/26] IPv6=[] ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" HandleID="k8s-pod-network.940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Workload="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0"
Jul 7 00:23:39.584335 containerd[1539]: 2025-07-07 00:23:39.545 [INFO][4467] cni-plugin/k8s.go 418: Populated endpoint ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-rm6r9" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0", GenerateName:"calico-apiserver-6f56d9bbdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f56d9bbdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"", Pod:"calico-apiserver-6f56d9bbdd-rm6r9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7776a91dc64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:23:39.584335 containerd[1539]: 2025-07-07 00:23:39.545 [INFO][4467] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.133/32] ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-rm6r9" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0"
Jul 7 00:23:39.584335 containerd[1539]: 2025-07-07 00:23:39.546 [INFO][4467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7776a91dc64 ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-rm6r9" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0"
Jul 7 00:23:39.584335 containerd[1539]: 2025-07-07 00:23:39.551 [INFO][4467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-rm6r9" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0"
Jul 7 00:23:39.584335 containerd[1539]: 2025-07-07 00:23:39.554 [INFO][4467] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-rm6r9" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0", GenerateName:"calico-apiserver-6f56d9bbdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f56d9bbdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7", Pod:"calico-apiserver-6f56d9bbdd-rm6r9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7776a91dc64", MAC:"a6:58:63:c8:92:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:23:39.584335 containerd[1539]: 2025-07-07 00:23:39.574 [INFO][4467] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" Namespace="calico-apiserver" Pod="calico-apiserver-6f56d9bbdd-rm6r9" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--apiserver--6f56d9bbdd--rm6r9-eth0"
Jul 7 00:23:39.637454 containerd[1539]: time="2025-07-07T00:23:39.637398765Z" level=info msg="connecting to shim 940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7" address="unix:///run/containerd/s/9bf8901f0cebb34de545ec39760b0a608b406326ed191f8a81d58e922668eef7" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:23:39.643763 kubelet[2697]: E0707 00:23:39.643724 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:23:39.687239 kubelet[2697]: I0707 00:23:39.686446 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-b84578f67-q67vb" podStartSLOduration=2.438453607 podStartE2EDuration="6.686423177s" podCreationTimestamp="2025-07-07 00:23:33 +0000 UTC" firstStartedPulling="2025-07-07 00:23:34.884196377 +0000 UTC m=+40.674110806" lastFinishedPulling="2025-07-07 00:23:39.132165935 +0000 UTC m=+44.922080376" observedRunningTime="2025-07-07 00:23:39.686315586 +0000 UTC m=+45.476230035" watchObservedRunningTime="2025-07-07 00:23:39.686423177 +0000 UTC m=+45.476337626"
Jul 7 00:23:39.726116 systemd-networkd[1454]: cali1eef5330366: Link UP
Jul 7 00:23:39.729517 systemd-networkd[1454]: cali1eef5330366: Gained carrier
Jul 7 00:23:39.741938 systemd[1]: Started cri-containerd-940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7.scope - libcontainer container 940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7.
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.428 [INFO][4471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0 csi-node-driver- calico-system dba08fd4-9512-4c11-aac8-02de403331df 712 0 2025-07-07 00:23:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.1-7-4873e20794 csi-node-driver-z8ldv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1eef5330366 [] [] }} ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Namespace="calico-system" Pod="csi-node-driver-z8ldv" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.428 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Namespace="calico-system" Pod="csi-node-driver-z8ldv" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.486 [INFO][4494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" HandleID="k8s-pod-network.497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Workload="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.487 [INFO][4494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" HandleID="k8s-pod-network.497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Workload="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-7-4873e20794", "pod":"csi-node-driver-z8ldv", "timestamp":"2025-07-07 00:23:39.48691674 +0000 UTC"}, Hostname:"ci-4344.1.1-7-4873e20794", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.487 [INFO][4494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.540 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.541 [INFO][4494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-7-4873e20794'
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.595 [INFO][4494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.611 [INFO][4494] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.625 [INFO][4494] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.636 [INFO][4494] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.644 [INFO][4494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.645 [INFO][4494] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.651 [INFO][4494] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.665 [INFO][4494] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.682 [INFO][4494] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.134/26] block=192.168.93.128/26 handle="k8s-pod-network.497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.682 [INFO][4494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.134/26] handle="k8s-pod-network.497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.682 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:23:39.775045 containerd[1539]: 2025-07-07 00:23:39.686 [INFO][4494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.134/26] IPv6=[] ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" HandleID="k8s-pod-network.497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Workload="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0"
Jul 7 00:23:39.776664 containerd[1539]: 2025-07-07 00:23:39.700 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Namespace="calico-system" Pod="csi-node-driver-z8ldv" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dba08fd4-9512-4c11-aac8-02de403331df", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"", Pod:"csi-node-driver-z8ldv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1eef5330366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:23:39.776664 containerd[1539]: 2025-07-07 00:23:39.702 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.134/32] ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Namespace="calico-system" Pod="csi-node-driver-z8ldv" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0"
Jul 7 00:23:39.776664 containerd[1539]: 2025-07-07 00:23:39.703 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1eef5330366 ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Namespace="calico-system" Pod="csi-node-driver-z8ldv" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0"
Jul 7 00:23:39.776664 containerd[1539]: 2025-07-07 00:23:39.732 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Namespace="calico-system" Pod="csi-node-driver-z8ldv" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0"
Jul 7 00:23:39.776664 containerd[1539]: 2025-07-07 00:23:39.735 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Namespace="calico-system" Pod="csi-node-driver-z8ldv" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dba08fd4-9512-4c11-aac8-02de403331df", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e", Pod:"csi-node-driver-z8ldv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1eef5330366", MAC:"8e:ec:fb:b7:ee:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:23:39.776664 containerd[1539]: 2025-07-07 00:23:39.769 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" Namespace="calico-system" Pod="csi-node-driver-z8ldv" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-csi--node--driver--z8ldv-eth0"
Jul 7 00:23:39.817832 containerd[1539]: time="2025-07-07T00:23:39.817239480Z" level=info msg="connecting to shim 497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e" address="unix:///run/containerd/s/15d086b4c1112c7eb8002d98eb3676ff9fa0c38bed4666d3fed0e5c66814249d" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:23:39.867601 containerd[1539]: time="2025-07-07T00:23:39.867548092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f56d9bbdd-rm6r9,Uid:ee3f96cb-ad32-4cbc-bfdc-11ac89b39a73,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7\""
Jul 7 00:23:39.879276 systemd[1]: Started cri-containerd-497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e.scope - libcontainer container 497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e.
Jul 7 00:23:39.936821 containerd[1539]: time="2025-07-07T00:23:39.936196582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8ldv,Uid:dba08fd4-9512-4c11-aac8-02de403331df,Namespace:calico-system,Attempt:0,} returns sandbox id \"497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e\""
Jul 7 00:23:40.341918 kubelet[2697]: E0707 00:23:40.341856 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jul 7 00:23:40.343668 containerd[1539]: time="2025-07-07T00:23:40.343606991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q95zr,Uid:41794743-531f-45a9-880c-f07b5eb8bc43,Namespace:kube-system,Attempt:0,}"
Jul 7 00:23:40.348930 containerd[1539]: time="2025-07-07T00:23:40.348868562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7cc78f44-gxnd4,Uid:bc0753b2-8f52-45cd-b135-8205c86199e7,Namespace:calico-system,Attempt:0,}"
Jul 7 00:23:40.553277 systemd-networkd[1454]: cali89dde4f318e: Link UP
Jul 7 00:23:40.556034 systemd-networkd[1454]: cali89dde4f318e: Gained carrier
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.401 [INFO][4614] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0 coredns-7c65d6cfc9- kube-system 41794743-531f-45a9-880c-f07b5eb8bc43 826 0 2025-07-07 00:22:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-7-4873e20794 coredns-7c65d6cfc9-q95zr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89dde4f318e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95zr" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.401 [INFO][4614] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95zr" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.471 [INFO][4637] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" HandleID="k8s-pod-network.b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Workload="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.472 [INFO][4637] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" HandleID="k8s-pod-network.b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Workload="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103d70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-7-4873e20794", "pod":"coredns-7c65d6cfc9-q95zr", "timestamp":"2025-07-07 00:23:40.47195562 +0000 UTC"}, Hostname:"ci-4344.1.1-7-4873e20794", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.472 [INFO][4637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.472 [INFO][4637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.472 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-7-4873e20794'
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.486 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.493 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.502 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.506 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.511 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.511 [INFO][4637] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.516 [INFO][4637] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.524 [INFO][4637] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.537 [INFO][4637] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.135/26] block=192.168.93.128/26 handle="k8s-pod-network.b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.538 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.135/26] handle="k8s-pod-network.b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" host="ci-4344.1.1-7-4873e20794"
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.538 [INFO][4637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:23:40.592223 containerd[1539]: 2025-07-07 00:23:40.538 [INFO][4637] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.135/26] IPv6=[] ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" HandleID="k8s-pod-network.b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Workload="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0"
Jul 7 00:23:40.594357 containerd[1539]: 2025-07-07 00:23:40.543 [INFO][4614] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95zr" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"41794743-531f-45a9-880c-f07b5eb8bc43", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"", Pod:"coredns-7c65d6cfc9-q95zr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dde4f318e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:23:40.594357 containerd[1539]: 2025-07-07 00:23:40.544 [INFO][4614] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.135/32] ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95zr" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0"
Jul 7 00:23:40.594357 containerd[1539]: 2025-07-07 00:23:40.544 [INFO][4614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89dde4f318e ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95zr" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0"
Jul 7 00:23:40.594357 containerd[1539]: 2025-07-07 00:23:40.556 [INFO][4614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95zr" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0"
Jul 7 00:23:40.594357 containerd[1539]: 2025-07-07 00:23:40.557 [INFO][4614] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95zr" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"41794743-531f-45a9-880c-f07b5eb8bc43", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d", Pod:"coredns-7c65d6cfc9-q95zr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.135/32"},
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dde4f318e", MAC:"46:a1:37:c3:58:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:40.594357 containerd[1539]: 2025-07-07 00:23:40.585 [INFO][4614] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q95zr" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-coredns--7c65d6cfc9--q95zr-eth0" Jul 7 00:23:40.648584 containerd[1539]: time="2025-07-07T00:23:40.648294949Z" level=info msg="connecting to shim b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d" address="unix:///run/containerd/s/18de8b7be7b5b9abadfd12295a3c28e52ff370461b37d1629471ec8e628d3221" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:40.725064 systemd-networkd[1454]: calif398f6c6af6: Link UP Jul 7 00:23:40.726375 systemd-networkd[1454]: calif398f6c6af6: Gained carrier Jul 7 00:23:40.737942 systemd[1]: Started cri-containerd-b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d.scope - libcontainer container b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d. 
Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.420 [INFO][4624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0 calico-kube-controllers-7b7cc78f44- calico-system bc0753b2-8f52-45cd-b135-8205c86199e7 820 0 2025-07-07 00:23:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b7cc78f44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.1-7-4873e20794 calico-kube-controllers-7b7cc78f44-gxnd4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif398f6c6af6 [] [] }} ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Namespace="calico-system" Pod="calico-kube-controllers-7b7cc78f44-gxnd4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.420 [INFO][4624] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Namespace="calico-system" Pod="calico-kube-controllers-7b7cc78f44-gxnd4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.490 [INFO][4643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" HandleID="k8s-pod-network.e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Workload="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.491 [INFO][4643] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" HandleID="k8s-pod-network.e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Workload="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-7-4873e20794", "pod":"calico-kube-controllers-7b7cc78f44-gxnd4", "timestamp":"2025-07-07 00:23:40.490921209 +0000 UTC"}, Hostname:"ci-4344.1.1-7-4873e20794", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.491 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.538 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.538 [INFO][4643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-7-4873e20794' Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.589 [INFO][4643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.608 [INFO][4643] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.633 [INFO][4643] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.639 [INFO][4643] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.651 [INFO][4643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.651 [INFO][4643] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.657 [INFO][4643] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564 Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.690 [INFO][4643] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.709 [INFO][4643] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.93.136/26] block=192.168.93.128/26 handle="k8s-pod-network.e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.709 [INFO][4643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.136/26] handle="k8s-pod-network.e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" host="ci-4344.1.1-7-4873e20794" Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.709 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:23:40.760300 containerd[1539]: 2025-07-07 00:23:40.709 [INFO][4643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.136/26] IPv6=[] ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" HandleID="k8s-pod-network.e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Workload="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" Jul 7 00:23:40.763150 containerd[1539]: 2025-07-07 00:23:40.715 [INFO][4624] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Namespace="calico-system" Pod="calico-kube-controllers-7b7cc78f44-gxnd4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0", GenerateName:"calico-kube-controllers-7b7cc78f44-", Namespace:"calico-system", SelfLink:"", UID:"bc0753b2-8f52-45cd-b135-8205c86199e7", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"7b7cc78f44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"", Pod:"calico-kube-controllers-7b7cc78f44-gxnd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif398f6c6af6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:40.763150 containerd[1539]: 2025-07-07 00:23:40.715 [INFO][4624] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.136/32] ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Namespace="calico-system" Pod="calico-kube-controllers-7b7cc78f44-gxnd4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" Jul 7 00:23:40.763150 containerd[1539]: 2025-07-07 00:23:40.715 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif398f6c6af6 ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Namespace="calico-system" Pod="calico-kube-controllers-7b7cc78f44-gxnd4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" Jul 7 00:23:40.763150 containerd[1539]: 2025-07-07 00:23:40.727 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Namespace="calico-system" 
Pod="calico-kube-controllers-7b7cc78f44-gxnd4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" Jul 7 00:23:40.763150 containerd[1539]: 2025-07-07 00:23:40.728 [INFO][4624] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Namespace="calico-system" Pod="calico-kube-controllers-7b7cc78f44-gxnd4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0", GenerateName:"calico-kube-controllers-7b7cc78f44-", Namespace:"calico-system", SelfLink:"", UID:"bc0753b2-8f52-45cd-b135-8205c86199e7", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b7cc78f44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-7-4873e20794", ContainerID:"e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564", Pod:"calico-kube-controllers-7b7cc78f44-gxnd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif398f6c6af6", MAC:"22:0d:62:88:40:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:23:40.763150 containerd[1539]: 2025-07-07 00:23:40.752 [INFO][4624] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" Namespace="calico-system" Pod="calico-kube-controllers-7b7cc78f44-gxnd4" WorkloadEndpoint="ci--4344.1.1--7--4873e20794-k8s-calico--kube--controllers--7b7cc78f44--gxnd4-eth0" Jul 7 00:23:40.825971 containerd[1539]: time="2025-07-07T00:23:40.825906642Z" level=info msg="connecting to shim e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564" address="unix:///run/containerd/s/6c25b2957d8ef2d3b4c732bd223b1554e50e1a45ee54135f3dc2a846c7ecc88f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:40.866949 systemd-networkd[1454]: cali1eef5330366: Gained IPv6LL Jul 7 00:23:40.928638 systemd[1]: Started cri-containerd-e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564.scope - libcontainer container e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564. 
Jul 7 00:23:40.944309 containerd[1539]: time="2025-07-07T00:23:40.944265407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q95zr,Uid:41794743-531f-45a9-880c-f07b5eb8bc43,Namespace:kube-system,Attempt:0,} returns sandbox id \"b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d\"" Jul 7 00:23:40.945569 kubelet[2697]: E0707 00:23:40.945525 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:40.953408 containerd[1539]: time="2025-07-07T00:23:40.950935416Z" level=info msg="CreateContainer within sandbox \"b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:23:41.007316 containerd[1539]: time="2025-07-07T00:23:41.007237713Z" level=info msg="Container e0294f86401a1de6f8912b20a942718c4b9069e7f42e5092050442e2ccd310ad: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:41.027338 containerd[1539]: time="2025-07-07T00:23:41.027259632Z" level=info msg="CreateContainer within sandbox \"b68d9ee2aba3a3f8144e59f5bdb35336199aea828c241cf1f73607bfc86ef31d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0294f86401a1de6f8912b20a942718c4b9069e7f42e5092050442e2ccd310ad\"" Jul 7 00:23:41.077174 containerd[1539]: time="2025-07-07T00:23:41.077054109Z" level=info msg="StartContainer for \"e0294f86401a1de6f8912b20a942718c4b9069e7f42e5092050442e2ccd310ad\"" Jul 7 00:23:41.078593 containerd[1539]: time="2025-07-07T00:23:41.078521822Z" level=info msg="connecting to shim e0294f86401a1de6f8912b20a942718c4b9069e7f42e5092050442e2ccd310ad" address="unix:///run/containerd/s/18de8b7be7b5b9abadfd12295a3c28e52ff370461b37d1629471ec8e628d3221" protocol=ttrpc version=3 Jul 7 00:23:41.134246 systemd[1]: Started cri-containerd-e0294f86401a1de6f8912b20a942718c4b9069e7f42e5092050442e2ccd310ad.scope - 
libcontainer container e0294f86401a1de6f8912b20a942718c4b9069e7f42e5092050442e2ccd310ad. Jul 7 00:23:41.145945 containerd[1539]: time="2025-07-07T00:23:41.145867447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b7cc78f44-gxnd4,Uid:bc0753b2-8f52-45cd-b135-8205c86199e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564\"" Jul 7 00:23:41.207778 containerd[1539]: time="2025-07-07T00:23:41.207741136Z" level=info msg="StartContainer for \"e0294f86401a1de6f8912b20a942718c4b9069e7f42e5092050442e2ccd310ad\" returns successfully" Jul 7 00:23:41.379457 systemd-networkd[1454]: cali7776a91dc64: Gained IPv6LL Jul 7 00:23:41.673794 kubelet[2697]: E0707 00:23:41.673430 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:41.730545 kubelet[2697]: I0707 00:23:41.730472 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q95zr" podStartSLOduration=43.730451407 podStartE2EDuration="43.730451407s" podCreationTimestamp="2025-07-07 00:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:23:41.697512224 +0000 UTC m=+47.487426673" watchObservedRunningTime="2025-07-07 00:23:41.730451407 +0000 UTC m=+47.520365856" Jul 7 00:23:41.765261 systemd-networkd[1454]: cali89dde4f318e: Gained IPv6LL Jul 7 00:23:42.331636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621297446.mount: Deactivated successfully. 
Jul 7 00:23:42.468167 systemd-networkd[1454]: calif398f6c6af6: Gained IPv6LL Jul 7 00:23:42.687174 kubelet[2697]: E0707 00:23:42.686642 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:43.420483 containerd[1539]: time="2025-07-07T00:23:43.418986129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:23:43.443087 containerd[1539]: time="2025-07-07T00:23:43.442993879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:43.448238 containerd[1539]: time="2025-07-07T00:23:43.446325725Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:43.452602 containerd[1539]: time="2025-07-07T00:23:43.452554381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:43.453617 containerd[1539]: time="2025-07-07T00:23:43.453405153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.320618714s" Jul 7 00:23:43.453617 containerd[1539]: time="2025-07-07T00:23:43.453454483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 
00:23:43.454873 containerd[1539]: time="2025-07-07T00:23:43.454837662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:23:43.457775 containerd[1539]: time="2025-07-07T00:23:43.457729622Z" level=info msg="CreateContainer within sandbox \"32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:23:43.485095 containerd[1539]: time="2025-07-07T00:23:43.483899606Z" level=info msg="Container 59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:43.496312 containerd[1539]: time="2025-07-07T00:23:43.496251193Z" level=info msg="CreateContainer within sandbox \"32f44e20b703cd1bf4056142f6ff2361d4a87a4133b379313d6c8e602020d767\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947\"" Jul 7 00:23:43.498892 containerd[1539]: time="2025-07-07T00:23:43.497312230Z" level=info msg="StartContainer for \"59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947\"" Jul 7 00:23:43.499087 containerd[1539]: time="2025-07-07T00:23:43.498964928Z" level=info msg="connecting to shim 59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947" address="unix:///run/containerd/s/06ffb06becd2f92e66b12bbad205b4f3ff37c4a4969bd5d4589ab01ab3f9388f" protocol=ttrpc version=3 Jul 7 00:23:43.538350 systemd[1]: Started cri-containerd-59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947.scope - libcontainer container 59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947. 
Jul 7 00:23:43.613961 containerd[1539]: time="2025-07-07T00:23:43.613873419Z" level=info msg="StartContainer for \"59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947\" returns successfully" Jul 7 00:23:43.693649 kubelet[2697]: E0707 00:23:43.693401 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:23:43.717038 kubelet[2697]: I0707 00:23:43.716626 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-gns4s" podStartSLOduration=22.146167218 podStartE2EDuration="28.716597381s" podCreationTimestamp="2025-07-07 00:23:15 +0000 UTC" firstStartedPulling="2025-07-07 00:23:36.884165278 +0000 UTC m=+42.674079721" lastFinishedPulling="2025-07-07 00:23:43.454595433 +0000 UTC m=+49.244509884" observedRunningTime="2025-07-07 00:23:43.714395019 +0000 UTC m=+49.504309470" watchObservedRunningTime="2025-07-07 00:23:43.716597381 +0000 UTC m=+49.506511832" Jul 7 00:23:44.915980 containerd[1539]: time="2025-07-07T00:23:44.915931844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947\" id:\"ecacfae8d4ece6b0432e9f4f96495b6f208651a926a8e984a14533bab489f5a8\" pid:4868 exit_status:1 exited_at:{seconds:1751847824 nanos:912825828}" Jul 7 00:23:45.847849 containerd[1539]: time="2025-07-07T00:23:45.847792699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947\" id:\"9b54bf2429c1ddaf39caf9a91181f58dc9ad442d2f674921c8e57bfdaddd0228\" pid:4894 exited_at:{seconds:1751847825 nanos:832793662}" Jul 7 00:23:47.353620 containerd[1539]: time="2025-07-07T00:23:47.352905635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 
00:23:47.354949 containerd[1539]: time="2025-07-07T00:23:47.354914918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:23:47.356174 containerd[1539]: time="2025-07-07T00:23:47.356132077Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:47.358748 containerd[1539]: time="2025-07-07T00:23:47.357879252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:47.358748 containerd[1539]: time="2025-07-07T00:23:47.358592891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.903714853s" Jul 7 00:23:47.358748 containerd[1539]: time="2025-07-07T00:23:47.358627426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:23:47.361756 containerd[1539]: time="2025-07-07T00:23:47.360583080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:23:47.363870 containerd[1539]: time="2025-07-07T00:23:47.363721568Z" level=info msg="CreateContainer within sandbox \"0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:23:47.372713 containerd[1539]: time="2025-07-07T00:23:47.370804744Z" level=info msg="Container 
934d0b15d724c3342c0fb2a9d111f4110de4eac243151ff041da5747301ce58b: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:47.388308 containerd[1539]: time="2025-07-07T00:23:47.388229504Z" level=info msg="CreateContainer within sandbox \"0dcd33c79914b94497a0b9958f2598c586f7ff42653a62c5ec8c4a6084e7dc87\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"934d0b15d724c3342c0fb2a9d111f4110de4eac243151ff041da5747301ce58b\"" Jul 7 00:23:47.389366 containerd[1539]: time="2025-07-07T00:23:47.389323928Z" level=info msg="StartContainer for \"934d0b15d724c3342c0fb2a9d111f4110de4eac243151ff041da5747301ce58b\"" Jul 7 00:23:47.392545 containerd[1539]: time="2025-07-07T00:23:47.392462114Z" level=info msg="connecting to shim 934d0b15d724c3342c0fb2a9d111f4110de4eac243151ff041da5747301ce58b" address="unix:///run/containerd/s/c55058851cdd80ae499e63859f16b724b2e47a76a95f5048a79c0588ff75da88" protocol=ttrpc version=3 Jul 7 00:23:47.444930 systemd[1]: Started cri-containerd-934d0b15d724c3342c0fb2a9d111f4110de4eac243151ff041da5747301ce58b.scope - libcontainer container 934d0b15d724c3342c0fb2a9d111f4110de4eac243151ff041da5747301ce58b. 
Jul 7 00:23:47.744386 containerd[1539]: time="2025-07-07T00:23:47.743732776Z" level=info msg="StartContainer for \"934d0b15d724c3342c0fb2a9d111f4110de4eac243151ff041da5747301ce58b\" returns successfully" Jul 7 00:23:48.027529 containerd[1539]: time="2025-07-07T00:23:48.027030003Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:48.028628 containerd[1539]: time="2025-07-07T00:23:48.028582149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:23:48.032257 containerd[1539]: time="2025-07-07T00:23:48.032193230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 670.275145ms" Jul 7 00:23:48.032580 containerd[1539]: time="2025-07-07T00:23:48.032460057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:23:48.034059 containerd[1539]: time="2025-07-07T00:23:48.034019253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:23:48.041400 containerd[1539]: time="2025-07-07T00:23:48.040572783Z" level=info msg="CreateContainer within sandbox \"940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:23:48.056549 containerd[1539]: time="2025-07-07T00:23:48.056471133Z" level=info msg="Container 3f26d9d5164b81b50cf59407b58754c744740d3b5696d71f3622bb09793ab499: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:23:48.074826 containerd[1539]: time="2025-07-07T00:23:48.074729276Z" level=info msg="CreateContainer within sandbox \"940f9e00d7ab4518e6c790db8dc98b62ab8b1be154023ea3ead1e8df21fa57e7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3f26d9d5164b81b50cf59407b58754c744740d3b5696d71f3622bb09793ab499\"" Jul 7 00:23:48.085725 containerd[1539]: time="2025-07-07T00:23:48.085183697Z" level=info msg="StartContainer for \"3f26d9d5164b81b50cf59407b58754c744740d3b5696d71f3622bb09793ab499\"" Jul 7 00:23:48.088745 containerd[1539]: time="2025-07-07T00:23:48.088630562Z" level=info msg="connecting to shim 3f26d9d5164b81b50cf59407b58754c744740d3b5696d71f3622bb09793ab499" address="unix:///run/containerd/s/9bf8901f0cebb34de545ec39760b0a608b406326ed191f8a81d58e922668eef7" protocol=ttrpc version=3 Jul 7 00:23:48.152954 systemd[1]: Started cri-containerd-3f26d9d5164b81b50cf59407b58754c744740d3b5696d71f3622bb09793ab499.scope - libcontainer container 3f26d9d5164b81b50cf59407b58754c744740d3b5696d71f3622bb09793ab499.
Jul 7 00:23:48.270919 containerd[1539]: time="2025-07-07T00:23:48.270522040Z" level=info msg="StartContainer for \"3f26d9d5164b81b50cf59407b58754c744740d3b5696d71f3622bb09793ab499\" returns successfully" Jul 7 00:23:48.751702 kubelet[2697]: I0707 00:23:48.751508 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f56d9bbdd-88gl4" podStartSLOduration=28.128075288 podStartE2EDuration="37.751485562s" podCreationTimestamp="2025-07-07 00:23:11 +0000 UTC" firstStartedPulling="2025-07-07 00:23:37.736552559 +0000 UTC m=+43.526467002" lastFinishedPulling="2025-07-07 00:23:47.359962844 +0000 UTC m=+53.149877276" observedRunningTime="2025-07-07 00:23:48.737954049 +0000 UTC m=+54.527868495" watchObservedRunningTime="2025-07-07 00:23:48.751485562 +0000 UTC m=+54.541400011" Jul 7 00:23:48.777037 kubelet[2697]: I0707 00:23:48.776782 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f56d9bbdd-rm6r9" podStartSLOduration=29.612459533 podStartE2EDuration="37.776761309s" podCreationTimestamp="2025-07-07 00:23:11 +0000 UTC" firstStartedPulling="2025-07-07 00:23:39.869395814 +0000 UTC m=+45.659310251" lastFinishedPulling="2025-07-07 00:23:48.033697582 +0000 UTC m=+53.823612027" observedRunningTime="2025-07-07 00:23:48.776560541 +0000 UTC m=+54.566474992" watchObservedRunningTime="2025-07-07 00:23:48.776761309 +0000 UTC m=+54.566675750" Jul 7 00:23:49.734817 kubelet[2697]: I0707 00:23:49.734509 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:23:49.735239 kubelet[2697]: I0707 00:23:49.735201 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:23:50.014341 containerd[1539]: time="2025-07-07T00:23:50.013896667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:23:50.015814 containerd[1539]: time="2025-07-07T00:23:50.015755034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:23:50.017187 containerd[1539]: time="2025-07-07T00:23:50.017130018Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:50.020888 containerd[1539]: time="2025-07-07T00:23:50.020041044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:50.021034 containerd[1539]: time="2025-07-07T00:23:50.020825242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.986765483s" Jul 7 00:23:50.021034 containerd[1539]: time="2025-07-07T00:23:50.020980511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:23:50.044309 containerd[1539]: time="2025-07-07T00:23:50.043889629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:23:50.047448 containerd[1539]: time="2025-07-07T00:23:50.046581572Z" level=info msg="CreateContainer within sandbox \"497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:23:50.060948 containerd[1539]: time="2025-07-07T00:23:50.060901610Z" level=info msg="Container bd6d23c4b69e4372ad062c08c1386b99a29ad323caeb2e073fc20e3932aa69ba: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:23:50.115893 containerd[1539]: time="2025-07-07T00:23:50.115841769Z" level=info msg="CreateContainer within sandbox \"497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bd6d23c4b69e4372ad062c08c1386b99a29ad323caeb2e073fc20e3932aa69ba\"" Jul 7 00:23:50.116991 containerd[1539]: time="2025-07-07T00:23:50.116964821Z" level=info msg="StartContainer for \"bd6d23c4b69e4372ad062c08c1386b99a29ad323caeb2e073fc20e3932aa69ba\"" Jul 7 00:23:50.121389 containerd[1539]: time="2025-07-07T00:23:50.121337957Z" level=info msg="connecting to shim bd6d23c4b69e4372ad062c08c1386b99a29ad323caeb2e073fc20e3932aa69ba" address="unix:///run/containerd/s/15d086b4c1112c7eb8002d98eb3676ff9fa0c38bed4666d3fed0e5c66814249d" protocol=ttrpc version=3 Jul 7 00:23:50.167003 systemd[1]: Started cri-containerd-bd6d23c4b69e4372ad062c08c1386b99a29ad323caeb2e073fc20e3932aa69ba.scope - libcontainer container bd6d23c4b69e4372ad062c08c1386b99a29ad323caeb2e073fc20e3932aa69ba.
Jul 7 00:23:50.238851 containerd[1539]: time="2025-07-07T00:23:50.237835911Z" level=info msg="StartContainer for \"bd6d23c4b69e4372ad062c08c1386b99a29ad323caeb2e073fc20e3932aa69ba\" returns successfully" Jul 7 00:23:52.514222 containerd[1539]: time="2025-07-07T00:23:52.514090613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:52.517304 containerd[1539]: time="2025-07-07T00:23:52.517245407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:23:52.522708 containerd[1539]: time="2025-07-07T00:23:52.521804616Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:52.525520 containerd[1539]: time="2025-07-07T00:23:52.525439746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:52.526506 containerd[1539]: time="2025-07-07T00:23:52.526114546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.481937108s" Jul 7 00:23:52.526506 containerd[1539]: time="2025-07-07T00:23:52.526154525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 7 00:23:52.527616 containerd[1539]: time="2025-07-07T00:23:52.527594713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:23:52.599400 containerd[1539]: time="2025-07-07T00:23:52.599353945Z" level=info msg="CreateContainer within sandbox \"e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:23:52.628284 containerd[1539]: time="2025-07-07T00:23:52.626922194Z" level=info msg="Container 8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:52.642771 containerd[1539]: time="2025-07-07T00:23:52.642725490Z" level=info msg="CreateContainer within sandbox \"e17fa888bfa72697b471a7a4e988d331af4046ca21412b9ad1a79b3fefbec564\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac\"" Jul 7 00:23:52.643897 containerd[1539]: time="2025-07-07T00:23:52.643869080Z" level=info msg="StartContainer for \"8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac\"" Jul 7 00:23:52.645577 containerd[1539]: time="2025-07-07T00:23:52.645540936Z" level=info msg="connecting to shim 8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac" address="unix:///run/containerd/s/6c25b2957d8ef2d3b4c732bd223b1554e50e1a45ee54135f3dc2a846c7ecc88f" protocol=ttrpc version=3 Jul 7 00:23:52.683968 systemd[1]: Started cri-containerd-8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac.scope - libcontainer container 8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac.
Jul 7 00:23:52.779964 containerd[1539]: time="2025-07-07T00:23:52.779833056Z" level=info msg="StartContainer for \"8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac\" returns successfully" Jul 7 00:23:52.964736 containerd[1539]: time="2025-07-07T00:23:52.964050876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac\" id:\"e3ee4616961d5083c670064eead9b369d39a82c6f936ba45b9e1ac2c6fcb5f92\" pid:5095 exited_at:{seconds:1751847832 nanos:928149710}" Jul 7 00:23:53.071020 kubelet[2697]: I0707 00:23:53.069576 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b7cc78f44-gxnd4" podStartSLOduration=25.695850186 podStartE2EDuration="37.069545765s" podCreationTimestamp="2025-07-07 00:23:16 +0000 UTC" firstStartedPulling="2025-07-07 00:23:41.153642477 +0000 UTC m=+46.943556908" lastFinishedPulling="2025-07-07 00:23:52.527338044 +0000 UTC m=+58.317252487" observedRunningTime="2025-07-07 00:23:52.824056081 +0000 UTC m=+58.613970528" watchObservedRunningTime="2025-07-07 00:23:53.069545765 +0000 UTC m=+58.859460214" Jul 7 00:23:53.994700 containerd[1539]: time="2025-07-07T00:23:53.994405513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:53.996397 containerd[1539]: time="2025-07-07T00:23:53.996352850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:23:53.996931 containerd[1539]: time="2025-07-07T00:23:53.996630998Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:23:53.999778 containerd[1539]: time="2025-07-07T00:23:53.999710338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:23:54.001201 containerd[1539]: time="2025-07-07T00:23:54.001064256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.473327743s" Jul 7 00:23:54.001201 containerd[1539]: time="2025-07-07T00:23:54.001104629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:23:54.004594 containerd[1539]: time="2025-07-07T00:23:54.004524402Z" level=info msg="CreateContainer within sandbox \"497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:23:54.015904 containerd[1539]: time="2025-07-07T00:23:54.015849920Z" level=info msg="Container 03793c6b0aee51cf018d07ed0b47e9ddab4c2e6301c1ab5282671142dc0a4b85: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:54.021714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819988757.mount: Deactivated successfully.
Jul 7 00:23:54.047660 containerd[1539]: time="2025-07-07T00:23:54.047558422Z" level=info msg="CreateContainer within sandbox \"497b8953cfd6f2d9e66659bae248c8ba773eceb2e0dbd4bc218ccb3c75be238e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03793c6b0aee51cf018d07ed0b47e9ddab4c2e6301c1ab5282671142dc0a4b85\"" Jul 7 00:23:54.049136 containerd[1539]: time="2025-07-07T00:23:54.049093984Z" level=info msg="StartContainer for \"03793c6b0aee51cf018d07ed0b47e9ddab4c2e6301c1ab5282671142dc0a4b85\"" Jul 7 00:23:54.051315 containerd[1539]: time="2025-07-07T00:23:54.051225799Z" level=info msg="connecting to shim 03793c6b0aee51cf018d07ed0b47e9ddab4c2e6301c1ab5282671142dc0a4b85" address="unix:///run/containerd/s/15d086b4c1112c7eb8002d98eb3676ff9fa0c38bed4666d3fed0e5c66814249d" protocol=ttrpc version=3 Jul 7 00:23:54.089015 systemd[1]: Started cri-containerd-03793c6b0aee51cf018d07ed0b47e9ddab4c2e6301c1ab5282671142dc0a4b85.scope - libcontainer container 03793c6b0aee51cf018d07ed0b47e9ddab4c2e6301c1ab5282671142dc0a4b85. 
Jul 7 00:23:54.158010 containerd[1539]: time="2025-07-07T00:23:54.157969193Z" level=info msg="StartContainer for \"03793c6b0aee51cf018d07ed0b47e9ddab4c2e6301c1ab5282671142dc0a4b85\" returns successfully" Jul 7 00:23:54.688833 kubelet[2697]: I0707 00:23:54.681867 2697 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:23:54.689555 kubelet[2697]: I0707 00:23:54.688862 2697 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:23:54.837654 kubelet[2697]: I0707 00:23:54.837408 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z8ldv" podStartSLOduration=24.774829333 podStartE2EDuration="38.837379489s" podCreationTimestamp="2025-07-07 00:23:16 +0000 UTC" firstStartedPulling="2025-07-07 00:23:39.939551847 +0000 UTC m=+45.729466294" lastFinishedPulling="2025-07-07 00:23:54.002102019 +0000 UTC m=+59.792016450" observedRunningTime="2025-07-07 00:23:54.826439891 +0000 UTC m=+60.616354341" watchObservedRunningTime="2025-07-07 00:23:54.837379489 +0000 UTC m=+60.627293939" Jul 7 00:23:55.651957 containerd[1539]: time="2025-07-07T00:23:55.651311486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac\" id:\"5e71d60a9ec52176a223436b7e43a6ea6728159063e7e04401d6321b20e6fa17\" pid:5163 exited_at:{seconds:1751847835 nanos:650307208}" Jul 7 00:23:55.672980 systemd[1]: Started sshd@7-146.190.122.157:22-139.178.68.195:40682.service - OpenSSH per-connection server daemon (139.178.68.195:40682). 
Jul 7 00:23:55.773324 containerd[1539]: time="2025-07-07T00:23:55.772869475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947\" id:\"3295c6eb06a7e49303b7d2389e39276a57097e76f44e0240819da757cb4bce98\" pid:5185 exited_at:{seconds:1751847835 nanos:772426821}" Jul 7 00:23:55.830287 sshd[5193]: Accepted publickey for core from 139.178.68.195 port 40682 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:23:55.833975 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:55.844210 systemd-logind[1519]: New session 8 of user core. Jul 7 00:23:55.854980 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:23:56.533715 sshd[5205]: Connection closed by 139.178.68.195 port 40682 Jul 7 00:23:56.533030 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:56.542298 systemd[1]: sshd@7-146.190.122.157:22-139.178.68.195:40682.service: Deactivated successfully. Jul 7 00:23:56.547197 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:23:56.549570 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:23:56.553258 systemd-logind[1519]: Removed session 8. Jul 7 00:24:00.555963 containerd[1539]: time="2025-07-07T00:24:00.555897901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a\" id:\"aeedc28afef97ebdc6ac64d6e396ba2b68c67527ebc559ec5175bdb79dd62ff6\" pid:5231 exited_at:{seconds:1751847840 nanos:555445224}" Jul 7 00:24:01.549536 systemd[1]: Started sshd@8-146.190.122.157:22-139.178.68.195:34708.service - OpenSSH per-connection server daemon (139.178.68.195:34708). 
Jul 7 00:24:01.647355 sshd[5245]: Accepted publickey for core from 139.178.68.195 port 34708 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:01.650427 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:01.659208 systemd-logind[1519]: New session 9 of user core. Jul 7 00:24:01.666066 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:24:01.767341 containerd[1539]: time="2025-07-07T00:24:01.767230075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947\" id:\"bb61d9695d156b2445586bac87dfb8c3cf2b87354a4664d94b7af3cf37f689f8\" pid:5260 exited_at:{seconds:1751847841 nanos:766556609}" Jul 7 00:24:01.905605 sshd[5266]: Connection closed by 139.178.68.195 port 34708 Jul 7 00:24:01.904842 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:01.910579 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:24:01.911423 systemd[1]: sshd@8-146.190.122.157:22-139.178.68.195:34708.service: Deactivated successfully. Jul 7 00:24:01.916884 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:24:01.921347 systemd-logind[1519]: Removed session 9. Jul 7 00:24:03.803802 kubelet[2697]: I0707 00:24:03.803744 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:24:06.924566 systemd[1]: Started sshd@9-146.190.122.157:22-139.178.68.195:34720.service - OpenSSH per-connection server daemon (139.178.68.195:34720). Jul 7 00:24:07.101597 sshd[5284]: Accepted publickey for core from 139.178.68.195 port 34720 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:07.105030 sshd-session[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:07.112077 systemd-logind[1519]: New session 10 of user core. 
Jul 7 00:24:07.119022 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:24:07.367771 sshd[5287]: Connection closed by 139.178.68.195 port 34720 Jul 7 00:24:07.370425 sshd-session[5284]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:07.383863 systemd[1]: Started sshd@10-146.190.122.157:22-139.178.68.195:34724.service - OpenSSH per-connection server daemon (139.178.68.195:34724). Jul 7 00:24:07.384585 systemd[1]: sshd@9-146.190.122.157:22-139.178.68.195:34720.service: Deactivated successfully. Jul 7 00:24:07.387982 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:24:07.391459 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:24:07.395961 systemd-logind[1519]: Removed session 10. Jul 7 00:24:07.468717 sshd[5296]: Accepted publickey for core from 139.178.68.195 port 34724 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:07.471198 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:07.479850 systemd-logind[1519]: New session 11 of user core. Jul 7 00:24:07.487966 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:24:07.752747 sshd[5301]: Connection closed by 139.178.68.195 port 34724 Jul 7 00:24:07.754454 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:07.768422 systemd[1]: sshd@10-146.190.122.157:22-139.178.68.195:34724.service: Deactivated successfully. Jul 7 00:24:07.772570 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:24:07.776984 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:24:07.787035 systemd-logind[1519]: Removed session 11. Jul 7 00:24:07.790774 systemd[1]: Started sshd@11-146.190.122.157:22-139.178.68.195:34736.service - OpenSSH per-connection server daemon (139.178.68.195:34736). 
Jul 7 00:24:07.902237 sshd[5312]: Accepted publickey for core from 139.178.68.195 port 34736 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:07.904788 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:07.913317 systemd-logind[1519]: New session 12 of user core. Jul 7 00:24:07.922309 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:24:08.145272 sshd[5314]: Connection closed by 139.178.68.195 port 34736 Jul 7 00:24:08.146235 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:08.152845 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:24:08.153092 systemd[1]: sshd@11-146.190.122.157:22-139.178.68.195:34736.service: Deactivated successfully. Jul 7 00:24:08.155525 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:24:08.158339 systemd-logind[1519]: Removed session 12. Jul 7 00:24:08.340082 kubelet[2697]: I0707 00:24:08.339439 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:24:08.371372 kubelet[2697]: E0707 00:24:08.371084 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:24:12.345920 kubelet[2697]: E0707 00:24:12.344756 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:24:13.167820 systemd[1]: Started sshd@12-146.190.122.157:22-139.178.68.195:39362.service - OpenSSH per-connection server daemon (139.178.68.195:39362). 
Jul 7 00:24:13.255456 sshd[5328]: Accepted publickey for core from 139.178.68.195 port 39362 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:13.256901 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:13.265976 systemd-logind[1519]: New session 13 of user core. Jul 7 00:24:13.276188 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:24:13.479333 sshd[5330]: Connection closed by 139.178.68.195 port 39362 Jul 7 00:24:13.480619 sshd-session[5328]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:13.486029 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:24:13.486659 systemd[1]: sshd@12-146.190.122.157:22-139.178.68.195:39362.service: Deactivated successfully. Jul 7 00:24:13.490139 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:24:13.493505 systemd-logind[1519]: Removed session 13. Jul 7 00:24:18.361171 kubelet[2697]: E0707 00:24:18.360957 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:24:18.503035 systemd[1]: Started sshd@13-146.190.122.157:22-139.178.68.195:44882.service - OpenSSH per-connection server daemon (139.178.68.195:44882). Jul 7 00:24:18.685726 sshd[5354]: Accepted publickey for core from 139.178.68.195 port 44882 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:18.689587 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:18.698814 systemd-logind[1519]: New session 14 of user core. Jul 7 00:24:18.704015 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 7 00:24:18.952595 sshd[5356]: Connection closed by 139.178.68.195 port 44882 Jul 7 00:24:18.953504 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:18.960327 systemd[1]: sshd@13-146.190.122.157:22-139.178.68.195:44882.service: Deactivated successfully. Jul 7 00:24:18.963498 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:24:18.965021 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:24:18.967210 systemd-logind[1519]: Removed session 14. Jul 7 00:24:19.154480 containerd[1539]: time="2025-07-07T00:24:19.154335460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac\" id:\"abdf3d624a412464e112b41cd2e1bcdd58aee3db562b2f0aa8eb62a06a6895a2\" pid:5381 exited_at:{seconds:1751847859 nanos:152397075}" Jul 7 00:24:23.972373 systemd[1]: Started sshd@14-146.190.122.157:22-139.178.68.195:44892.service - OpenSSH per-connection server daemon (139.178.68.195:44892). Jul 7 00:24:24.034846 sshd[5394]: Accepted publickey for core from 139.178.68.195 port 44892 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:24.037474 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:24.047336 systemd-logind[1519]: New session 15 of user core. Jul 7 00:24:24.052943 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:24:24.209719 sshd[5396]: Connection closed by 139.178.68.195 port 44892 Jul 7 00:24:24.210443 sshd-session[5394]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:24.216137 systemd[1]: sshd@14-146.190.122.157:22-139.178.68.195:44892.service: Deactivated successfully. Jul 7 00:24:24.219505 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:24:24.221806 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit. 
Jul 7 00:24:24.224179 systemd-logind[1519]: Removed session 15. Jul 7 00:24:24.367502 kubelet[2697]: E0707 00:24:24.367378 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:24:25.645977 containerd[1539]: time="2025-07-07T00:24:25.636452427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac\" id:\"7b4a1a6681c94013c822bcbb2e1d51e0bfa44cff478f37eb436bb2cbc3bb75d9\" pid:5427 exited_at:{seconds:1751847865 nanos:635537185}" Jul 7 00:24:25.675408 containerd[1539]: time="2025-07-07T00:24:25.675321449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947\" id:\"c3315cab4430eeba88b2028d4de8c749080f1d67b7e9814c5e4ca748f3d86257\" pid:5440 exited_at:{seconds:1751847865 nanos:674950983}" Jul 7 00:24:26.347368 kubelet[2697]: E0707 00:24:26.347091 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:24:29.226509 systemd[1]: Started sshd@15-146.190.122.157:22-139.178.68.195:50296.service - OpenSSH per-connection server daemon (139.178.68.195:50296). Jul 7 00:24:29.322780 sshd[5454]: Accepted publickey for core from 139.178.68.195 port 50296 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:29.325842 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:29.336191 systemd-logind[1519]: New session 16 of user core. Jul 7 00:24:29.346995 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 7 00:24:29.599814 sshd[5456]: Connection closed by 139.178.68.195 port 50296 Jul 7 00:24:29.601215 sshd-session[5454]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:29.620318 systemd[1]: sshd@15-146.190.122.157:22-139.178.68.195:50296.service: Deactivated successfully. Jul 7 00:24:29.625388 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:24:29.629791 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:24:29.636128 systemd[1]: Started sshd@16-146.190.122.157:22-139.178.68.195:50310.service - OpenSSH per-connection server daemon (139.178.68.195:50310). Jul 7 00:24:29.639362 systemd-logind[1519]: Removed session 16. Jul 7 00:24:29.712788 sshd[5467]: Accepted publickey for core from 139.178.68.195 port 50310 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:29.716122 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:29.724785 systemd-logind[1519]: New session 17 of user core. Jul 7 00:24:29.730210 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:24:30.140271 sshd[5471]: Connection closed by 139.178.68.195 port 50310 Jul 7 00:24:30.142346 sshd-session[5467]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:30.157448 systemd[1]: sshd@16-146.190.122.157:22-139.178.68.195:50310.service: Deactivated successfully. Jul 7 00:24:30.166901 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:24:30.169011 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:24:30.178647 systemd[1]: Started sshd@17-146.190.122.157:22-139.178.68.195:50324.service - OpenSSH per-connection server daemon (139.178.68.195:50324). Jul 7 00:24:30.180435 systemd-logind[1519]: Removed session 17. 
Jul 7 00:24:30.318600 sshd[5482]: Accepted publickey for core from 139.178.68.195 port 50324 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:30.320436 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:30.327774 systemd-logind[1519]: New session 18 of user core. Jul 7 00:24:30.334149 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:24:30.789418 containerd[1539]: time="2025-07-07T00:24:30.789343208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca52f319811763ede2a95c592fbec1211769fe0189090260552f2514fe4f0d4a\" id:\"f9583e54d7e65d88ae4ad551a66fb65a07085a290e03db2d6302bc4bab4e3b9c\" pid:5501 exited_at:{seconds:1751847870 nanos:788127331}" Jul 7 00:24:34.251564 sshd[5485]: Connection closed by 139.178.68.195 port 50324 Jul 7 00:24:34.259065 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Jul 7 00:24:34.305585 systemd[1]: Started sshd@18-146.190.122.157:22-139.178.68.195:50332.service - OpenSSH per-connection server daemon (139.178.68.195:50332). Jul 7 00:24:34.306373 systemd[1]: sshd@17-146.190.122.157:22-139.178.68.195:50324.service: Deactivated successfully. Jul 7 00:24:34.324599 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:24:34.329986 systemd[1]: session-18.scope: Consumed 724ms CPU time, 79M memory peak. Jul 7 00:24:34.344361 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:24:34.358810 systemd-logind[1519]: Removed session 18. Jul 7 00:24:34.515318 sshd[5524]: Accepted publickey for core from 139.178.68.195 port 50332 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8 Jul 7 00:24:34.517984 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:24:34.536459 systemd-logind[1519]: New session 19 of user core. Jul 7 00:24:34.541562 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 7 00:24:35.753826 sshd[5531]: Connection closed by 139.178.68.195 port 50332
Jul 7 00:24:35.757021 sshd-session[5524]: pam_unix(sshd:session): session closed for user core
Jul 7 00:24:35.770203 systemd[1]: sshd@18-146.190.122.157:22-139.178.68.195:50332.service: Deactivated successfully.
Jul 7 00:24:35.775812 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 00:24:35.777848 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit.
Jul 7 00:24:35.789514 systemd[1]: Started sshd@19-146.190.122.157:22-139.178.68.195:50344.service - OpenSSH per-connection server daemon (139.178.68.195:50344).
Jul 7 00:24:35.794585 systemd-logind[1519]: Removed session 19.
Jul 7 00:24:35.936296 sshd[5541]: Accepted publickey for core from 139.178.68.195 port 50344 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:24:35.938308 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:24:35.949780 systemd-logind[1519]: New session 20 of user core.
Jul 7 00:24:35.956451 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 00:24:36.304603 sshd[5543]: Connection closed by 139.178.68.195 port 50344
Jul 7 00:24:36.305637 sshd-session[5541]: pam_unix(sshd:session): session closed for user core
Jul 7 00:24:36.313292 systemd[1]: sshd@19-146.190.122.157:22-139.178.68.195:50344.service: Deactivated successfully.
Jul 7 00:24:36.322834 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 00:24:36.326998 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit.
Jul 7 00:24:36.331817 systemd-logind[1519]: Removed session 20.
Jul 7 00:24:41.322831 systemd[1]: Started sshd@20-146.190.122.157:22-139.178.68.195:40388.service - OpenSSH per-connection server daemon (139.178.68.195:40388).
Jul 7 00:24:41.466788 sshd[5558]: Accepted publickey for core from 139.178.68.195 port 40388 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:24:41.467661 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:24:41.480241 systemd-logind[1519]: New session 21 of user core.
Jul 7 00:24:41.488036 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 00:24:41.694597 sshd[5562]: Connection closed by 139.178.68.195 port 40388
Jul 7 00:24:41.695998 sshd-session[5558]: pam_unix(sshd:session): session closed for user core
Jul 7 00:24:41.702869 systemd[1]: sshd@20-146.190.122.157:22-139.178.68.195:40388.service: Deactivated successfully.
Jul 7 00:24:41.706042 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 00:24:41.710544 systemd-logind[1519]: Session 21 logged out. Waiting for processes to exit.
Jul 7 00:24:41.714853 systemd-logind[1519]: Removed session 21.
Jul 7 00:24:46.714222 systemd[1]: Started sshd@21-146.190.122.157:22-139.178.68.195:40398.service - OpenSSH per-connection server daemon (139.178.68.195:40398).
Jul 7 00:24:46.799128 sshd[5575]: Accepted publickey for core from 139.178.68.195 port 40398 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:24:46.803563 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:24:46.811873 systemd-logind[1519]: New session 22 of user core.
Jul 7 00:24:46.818031 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 00:24:46.998848 sshd[5577]: Connection closed by 139.178.68.195 port 40398
Jul 7 00:24:46.997262 sshd-session[5575]: pam_unix(sshd:session): session closed for user core
Jul 7 00:24:47.002743 systemd-logind[1519]: Session 22 logged out. Waiting for processes to exit.
Jul 7 00:24:47.003219 systemd[1]: sshd@21-146.190.122.157:22-139.178.68.195:40398.service: Deactivated successfully.
Jul 7 00:24:47.008529 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 00:24:47.013665 systemd-logind[1519]: Removed session 22.
Jul 7 00:24:52.018417 systemd[1]: Started sshd@22-146.190.122.157:22-139.178.68.195:54734.service - OpenSSH per-connection server daemon (139.178.68.195:54734).
Jul 7 00:24:52.092823 sshd[5589]: Accepted publickey for core from 139.178.68.195 port 54734 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:24:52.094650 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:24:52.102093 systemd-logind[1519]: New session 23 of user core.
Jul 7 00:24:52.108901 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 00:24:52.270765 sshd[5591]: Connection closed by 139.178.68.195 port 54734
Jul 7 00:24:52.272383 sshd-session[5589]: pam_unix(sshd:session): session closed for user core
Jul 7 00:24:52.279233 systemd[1]: sshd@22-146.190.122.157:22-139.178.68.195:54734.service: Deactivated successfully.
Jul 7 00:24:52.284948 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 00:24:52.291451 systemd-logind[1519]: Session 23 logged out. Waiting for processes to exit.
Jul 7 00:24:52.293245 systemd-logind[1519]: Removed session 23.
Jul 7 00:24:56.122003 containerd[1539]: time="2025-07-07T00:24:56.121923092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbfcf081b91b0d3d6245c2dc3d71048fd398d60da7224b98893be1a9b8f22ac\" id:\"2741e5527e919fa718c0597ed4de60ec90b584a992155ef1e4a15a6773af3384\" pid:5634 exited_at:{seconds:1751847896 nanos:88509990}"
Jul 7 00:24:56.407253 containerd[1539]: time="2025-07-07T00:24:56.406537132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59774bd0c88bf3bd0bd649b639dc51b7eed79aaa09674a5acc8801a79305d947\" id:\"90d5921c74171edb6ca809f18d0002d5826806bad70232b558a918f02b79fa06\" pid:5627 exited_at:{seconds:1751847896 nanos:405979859}"
Jul 7 00:24:57.291006 systemd[1]: Started sshd@23-146.190.122.157:22-139.178.68.195:54740.service - OpenSSH per-connection server daemon (139.178.68.195:54740).
Jul 7 00:24:57.360927 sshd[5659]: Accepted publickey for core from 139.178.68.195 port 54740 ssh2: RSA SHA256:YTcSFuhTbyqZDVTaIAnK2RLjCV77bWlMJOmuLtNHqI8
Jul 7 00:24:57.364133 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:24:57.371066 systemd-logind[1519]: New session 24 of user core.
Jul 7 00:24:57.376949 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 00:24:57.552817 sshd[5661]: Connection closed by 139.178.68.195 port 54740
Jul 7 00:24:57.555277 sshd-session[5659]: pam_unix(sshd:session): session closed for user core
Jul 7 00:24:57.565236 systemd[1]: sshd@23-146.190.122.157:22-139.178.68.195:54740.service: Deactivated successfully.
Jul 7 00:24:57.571832 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 00:24:57.578045 systemd-logind[1519]: Session 24 logged out. Waiting for processes to exit.
Jul 7 00:24:57.579741 systemd-logind[1519]: Removed session 24.
Jul 7 00:24:58.383112 kubelet[2697]: E0707 00:24:58.383040 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"