Apr 30 03:25:28.947116 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:25:28.947161 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:25:28.947181 kernel: BIOS-provided physical RAM map:
Apr 30 03:25:28.947193 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 03:25:28.947205 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 03:25:28.947215 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 03:25:28.947257 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Apr 30 03:25:28.947269 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Apr 30 03:25:28.947278 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 03:25:28.947294 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 03:25:28.947304 kernel: NX (Execute Disable) protection: active
Apr 30 03:25:28.947313 kernel: APIC: Static calls initialized
Apr 30 03:25:28.947329 kernel: SMBIOS 2.8 present.
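Annotation: the BIOS-e820 entries above can be tallied per region type with a short script. A minimal sketch, assuming dmesg-style `BIOS-e820:` lines with inclusive hexadecimal ranges; the embedded sample is a hypothetical three-line excerpt, not the full map:

```python
import re

# Hypothetical excerpt in the same shape as the BIOS-e820 lines above.
E820_LINES = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
"""

PATTERN = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

def e820_totals(text):
    """Sum the size of each region type; ranges are inclusive, hence the +1."""
    totals = {}
    for start, end, kind in PATTERN.findall(text):
        size = int(end, 16) - int(start, 16) + 1
        totals[kind] = totals.get(kind, 0) + size
    return totals

print(e820_totals(E820_LINES))
```

On a live machine the same lines can be fed in from `dmesg` output; the regions here account for the droplet's 2 GiB of guest RAM.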
Apr 30 03:25:28.947339 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Apr 30 03:25:28.947351 kernel: Hypervisor detected: KVM
Apr 30 03:25:28.947368 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:25:28.947383 kernel: kvm-clock: using sched offset of 3319768988 cycles
Apr 30 03:25:28.947395 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:25:28.947406 kernel: tsc: Detected 2494.134 MHz processor
Apr 30 03:25:28.947418 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:25:28.947430 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:25:28.947442 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Apr 30 03:25:28.947453 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 03:25:28.947464 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:25:28.947483 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:25:28.947494 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Apr 30 03:25:28.947507 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:25:28.947515 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:25:28.947523 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:25:28.947531 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 30 03:25:28.947540 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:25:28.947548 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:25:28.947556 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:25:28.947568 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:25:28.947576 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Apr 30 03:25:28.947584 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Apr 30 03:25:28.947592 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 30 03:25:28.947600 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Apr 30 03:25:28.947608 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Apr 30 03:25:28.947616 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Apr 30 03:25:28.947632 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Apr 30 03:25:28.947640 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:25:28.947649 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:25:28.947658 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 30 03:25:28.947666 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 30 03:25:28.947679 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Apr 30 03:25:28.947688 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Apr 30 03:25:28.947700 kernel: Zone ranges:
Apr 30 03:25:28.947709 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:25:28.947717 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Apr 30 03:25:28.947725 kernel: Normal empty
Apr 30 03:25:28.947733 kernel: Movable zone start for each node
Apr 30 03:25:28.947741 kernel: Early memory node ranges
Apr 30 03:25:28.947750 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 03:25:28.947758 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Apr 30 03:25:28.947767 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Apr 30 03:25:28.947779 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:25:28.947787 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 03:25:28.947798 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Apr 30 03:25:28.947806 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:25:28.947814 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:25:28.947823 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:25:28.947831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:25:28.947840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:25:28.947848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:25:28.947860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:25:28.947868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:25:28.947876 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:25:28.947885 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:25:28.947893 kernel: TSC deadline timer available
Apr 30 03:25:28.947901 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:25:28.947909 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:25:28.947918 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Apr 30 03:25:28.947929 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:25:28.947938 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:25:28.947950 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:25:28.947958 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:25:28.947966 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:25:28.947974 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:25:28.947982 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 03:25:28.947992 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:25:28.948001 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:25:28.948013 kernel: random: crng init done
Apr 30 03:25:28.948021 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:25:28.948030 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:25:28.948038 kernel: Fallback order for Node 0: 0
Apr 30 03:25:28.948047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Apr 30 03:25:28.948055 kernel: Policy zone: DMA32
Apr 30 03:25:28.948063 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:25:28.948072 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125148K reserved, 0K cma-reserved)
Apr 30 03:25:28.948080 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:25:28.948093 kernel: Kernel/User page tables isolation: enabled
Apr 30 03:25:28.948102 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:25:28.948110 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:25:28.948118 kernel: Dynamic Preempt: voluntary
Apr 30 03:25:28.948126 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:25:28.948139 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:25:28.948152 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:25:28.948163 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:25:28.948177 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:25:28.948189 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:25:28.948198 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:25:28.948206 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:25:28.950264 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:25:28.950315 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:25:28.950332 kernel: Console: colour VGA+ 80x25
Apr 30 03:25:28.950342 kernel: printk: console [tty0] enabled
Apr 30 03:25:28.950351 kernel: printk: console [ttyS0] enabled
Apr 30 03:25:28.950359 kernel: ACPI: Core revision 20230628
Apr 30 03:25:28.950369 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:25:28.950386 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:25:28.950396 kernel: x2apic enabled
Apr 30 03:25:28.950409 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:25:28.950421 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:25:28.950434 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Apr 30 03:25:28.950446 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Apr 30 03:25:28.950459 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 30 03:25:28.950472 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 30 03:25:28.950503 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:25:28.950516 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:25:28.950530 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:25:28.950550 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:25:28.950566 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 30 03:25:28.950577 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:25:28.950586 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:25:28.950595 kernel: MDS: Mitigation: Clear CPU buffers
Apr 30 03:25:28.950604 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:25:28.950623 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:25:28.950632 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:25:28.950641 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:25:28.950651 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:25:28.950660 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 30 03:25:28.950669 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:25:28.950678 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:25:28.950688 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:25:28.950702 kernel: landlock: Up and running.
Apr 30 03:25:28.950711 kernel: SELinux: Initializing.
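Annotation: the Spectre/MDS/MMIO lines above all follow a `Name: Status: detail` shape, so a per-vulnerability summary can be pulled out of a captured log with a few lines of string splitting. A minimal sketch over a hypothetical sample mirroring the messages above (on a live system the authoritative source is `/sys/devices/system/cpu/vulnerabilities/`, not the boot log):

```python
# Hypothetical sample in the same shape as the mitigation lines above.
LOG = [
    "Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization",
    "Spectre V2 : Mitigation: Retpolines",
    "MDS: Mitigation: Clear CPU buffers",
    "MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode",
]

def mitigation_summary(lines):
    """Map each vulnerability name to (status, detail) from 'Name: Status: detail' lines."""
    summary = {}
    for line in lines:
        name, _, rest = line.partition(": ")
        status, _, detail = rest.partition(": ")
        summary[name.strip()] = (status.strip(), detail.strip())
    return summary

for vuln, (status, detail) in mitigation_summary(LOG).items():
    print(f"{vuln}: {status} ({detail})")
```

Note that this log reports one entry as `Vulnerable` (MMIO Stale Data, for lack of microcode), which such a summary surfaces immediately.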
Apr 30 03:25:28.950720 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:25:28.950730 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:25:28.950739 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Apr 30 03:25:28.950748 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:25:28.950757 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:25:28.950766 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:25:28.950780 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Apr 30 03:25:28.950789 kernel: signal: max sigframe size: 1776
Apr 30 03:25:28.950799 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:25:28.950809 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:25:28.950819 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:25:28.950828 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:25:28.950837 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:25:28.950847 kernel: .... node #0, CPUs: #1
Apr 30 03:25:28.950859 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:25:28.950878 kernel: smpboot: Max logical packages: 1
Apr 30 03:25:28.950898 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Apr 30 03:25:28.950911 kernel: devtmpfs: initialized
Apr 30 03:25:28.950925 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:25:28.950937 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:25:28.950950 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:25:28.950964 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:25:28.950980 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:25:28.950994 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:25:28.951008 kernel: audit: type=2000 audit(1745983528.231:1): state=initialized audit_enabled=0 res=1
Apr 30 03:25:28.951028 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:25:28.951042 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:25:28.951057 kernel: cpuidle: using governor menu
Apr 30 03:25:28.951073 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:25:28.951086 kernel: dca service started, version 1.12.1
Apr 30 03:25:28.951099 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:25:28.951115 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:25:28.951130 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:25:28.951143 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:25:28.951164 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:25:28.951173 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:25:28.951182 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:25:28.951191 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:25:28.951201 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:25:28.951210 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:25:28.951219 kernel: ACPI: Interpreter enabled
Apr 30 03:25:28.952316 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:25:28.952336 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:25:28.952360 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:25:28.952375 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:25:28.952391 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 03:25:28.952407 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:25:28.952702 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:25:28.952821 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 03:25:28.952923 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 03:25:28.952943 kernel: acpiphp: Slot [3] registered
Apr 30 03:25:28.952953 kernel: acpiphp: Slot [4] registered
Apr 30 03:25:28.952962 kernel: acpiphp: Slot [5] registered
Apr 30 03:25:28.952971 kernel: acpiphp: Slot [6] registered
Apr 30 03:25:28.952980 kernel: acpiphp: Slot [7] registered
Apr 30 03:25:28.952989 kernel: acpiphp: Slot [8] registered
Apr 30 03:25:28.952998 kernel: acpiphp: Slot [9] registered
Apr 30 03:25:28.953008 kernel: acpiphp: Slot [10] registered
Apr 30 03:25:28.953017 kernel: acpiphp: Slot [11] registered
Apr 30 03:25:28.953031 kernel: acpiphp: Slot [12] registered
Apr 30 03:25:28.953040 kernel: acpiphp: Slot [13] registered
Apr 30 03:25:28.953049 kernel: acpiphp: Slot [14] registered
Apr 30 03:25:28.953058 kernel: acpiphp: Slot [15] registered
Apr 30 03:25:28.953067 kernel: acpiphp: Slot [16] registered
Apr 30 03:25:28.953076 kernel: acpiphp: Slot [17] registered
Apr 30 03:25:28.953085 kernel: acpiphp: Slot [18] registered
Apr 30 03:25:28.953094 kernel: acpiphp: Slot [19] registered
Apr 30 03:25:28.953102 kernel: acpiphp: Slot [20] registered
Apr 30 03:25:28.953111 kernel: acpiphp: Slot [21] registered
Apr 30 03:25:28.953125 kernel: acpiphp: Slot [22] registered
Apr 30 03:25:28.953134 kernel: acpiphp: Slot [23] registered
Apr 30 03:25:28.953142 kernel: acpiphp: Slot [24] registered
Apr 30 03:25:28.953151 kernel: acpiphp: Slot [25] registered
Apr 30 03:25:28.953160 kernel: acpiphp: Slot [26] registered
Apr 30 03:25:28.953169 kernel: acpiphp: Slot [27] registered
Apr 30 03:25:28.953178 kernel: acpiphp: Slot [28] registered
Apr 30 03:25:28.953187 kernel: acpiphp: Slot [29] registered
Apr 30 03:25:28.953196 kernel: acpiphp: Slot [30] registered
Apr 30 03:25:28.953209 kernel: acpiphp: Slot [31] registered
Apr 30 03:25:28.953218 kernel: PCI host bridge to bus 0000:00
Apr 30 03:25:28.954487 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:25:28.954648 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:25:28.954797 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:25:28.954955 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 03:25:28.955097 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Apr 30 03:25:28.955268 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:25:28.955503 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 03:25:28.955706 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 03:25:28.955884 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 30 03:25:28.955999 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Apr 30 03:25:28.956130 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 30 03:25:28.958439 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 30 03:25:28.958617 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 30 03:25:28.958730 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 30 03:25:28.958863 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Apr 30 03:25:28.958962 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Apr 30 03:25:28.959106 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 30 03:25:28.959208 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Apr 30 03:25:28.959333 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Apr 30 03:25:28.959450 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Apr 30 03:25:28.959553 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Apr 30 03:25:28.959674 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Apr 30 03:25:28.959793 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Apr 30 03:25:28.959920 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Apr 30 03:25:28.960052 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:25:28.960186 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:25:28.962911 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Apr 30 03:25:28.963044 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Apr 30 03:25:28.963142 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Apr 30 03:25:28.963279 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:25:28.963376 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Apr 30 03:25:28.963484 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Apr 30 03:25:28.963578 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Apr 30 03:25:28.963705 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Apr 30 03:25:28.963823 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Apr 30 03:25:28.963925 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Apr 30 03:25:28.964027 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Apr 30 03:25:28.964147 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:25:28.966987 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Apr 30 03:25:28.967133 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Apr 30 03:25:28.967246 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Apr 30 03:25:28.967369 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:25:28.967475 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Apr 30 03:25:28.967574 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Apr 30 03:25:28.967671 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Apr 30 03:25:28.967785 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Apr 30 03:25:28.967935 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Apr 30 03:25:28.968056 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Apr 30 03:25:28.968071 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:25:28.968082 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:25:28.968097 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:25:28.968110 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:25:28.968131 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 03:25:28.968140 kernel: iommu: Default domain type: Translated
Apr 30 03:25:28.968150 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:25:28.968159 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:25:28.968168 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:25:28.968178 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 03:25:28.968188 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Apr 30 03:25:28.968360 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 30 03:25:28.968467 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 30 03:25:28.968581 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:25:28.968595 kernel: vgaarb: loaded
Apr 30 03:25:28.968605 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:25:28.968614 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:25:28.968623 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:25:28.968633 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:25:28.968642 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:25:28.968652 kernel: pnp: PnP ACPI init
Apr 30 03:25:28.968661 kernel: pnp: PnP ACPI: found 4 devices
Apr 30 03:25:28.968675 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:25:28.968684 kernel: NET: Registered PF_INET protocol family
Apr 30 03:25:28.968693 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:25:28.968703 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:25:28.968712 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:25:28.968721 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:25:28.968730 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:25:28.968740 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:25:28.968749 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:25:28.968762 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:25:28.968771 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:25:28.968780 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:25:28.968903 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:25:28.969042 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:25:28.969138 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:25:28.970685 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 03:25:28.970923 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Apr 30 03:25:28.971108 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 30 03:25:28.971287 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 03:25:28.971311 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 03:25:28.971458 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 32315 usecs
Apr 30 03:25:28.971478 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:25:28.971495 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:25:28.971511 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Apr 30 03:25:28.971528 kernel: Initialise system trusted keyrings
Apr 30 03:25:28.971553 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:25:28.971569 kernel: Key type asymmetric registered
Apr 30 03:25:28.971585 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:25:28.971601 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:25:28.971616 kernel: io scheduler mq-deadline registered
Apr 30 03:25:28.971632 kernel: io scheduler kyber registered
Apr 30 03:25:28.971648 kernel: io scheduler bfq registered
Apr 30 03:25:28.971664 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:25:28.971680 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Apr 30 03:25:28.971701 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 30 03:25:28.971717 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 30 03:25:28.971734 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:25:28.971750 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:25:28.971767 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:25:28.971782 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:25:28.971798 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:25:28.971815 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:25:28.972023 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 03:25:28.972170 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 03:25:28.974484 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:25:28 UTC (1745983528)
Apr 30 03:25:28.974652 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Apr 30 03:25:28.974672 kernel: intel_pstate: CPU model not supported
Apr 30 03:25:28.974687 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:25:28.974701 kernel: Segment Routing with IPv6
Apr 30 03:25:28.974715 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:25:28.974729 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:25:28.974756 kernel: Key type dns_resolver registered
Apr 30 03:25:28.974770 kernel: IPI shorthand broadcast: enabled
Apr 30 03:25:28.974785 kernel: sched_clock: Marking stable (1008003633, 112331326)->(1226106117, -105771158)
Apr 30 03:25:28.974799 kernel: registered taskstats version 1
Apr 30 03:25:28.974813 kernel: Loading compiled-in X.509 certificates
Apr 30 03:25:28.974829 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:25:28.974845 kernel: Key type .fscrypt registered
Apr 30 03:25:28.974862 kernel: Key type fscrypt-provisioning registered
Apr 30 03:25:28.974879 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:25:28.974898 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:25:28.974912 kernel: ima: No architecture policies found
Apr 30 03:25:28.974927 kernel: clk: Disabling unused clocks
Apr 30 03:25:28.974944 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:25:28.974960 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:25:28.975008 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:25:28.975030 kernel: Run /init as init process
Apr 30 03:25:28.975047 kernel: with arguments:
Apr 30 03:25:28.975062 kernel: /init
Apr 30 03:25:28.975081 kernel: with environment:
Apr 30 03:25:28.975097 kernel: HOME=/
Apr 30 03:25:28.975114 kernel: TERM=linux
Apr 30 03:25:28.975128 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:25:28.975147 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:25:28.975167 systemd[1]: Detected virtualization kvm.
Apr 30 03:25:28.975185 systemd[1]: Detected architecture x86-64.
Apr 30 03:25:28.975205 systemd[1]: Running in initrd.
Apr 30 03:25:28.975220 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:25:28.975250 systemd[1]: Hostname set to .
Apr 30 03:25:28.975268 systemd[1]: Initializing machine ID from VM UUID.
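Annotation: the rtc_cmos line earlier in the log records the clock both as a wall-clock time and as a Unix epoch (`2025-04-30T03:25:28 UTC (1745983528)`), and the same epoch shows up in the audit record (`audit(1745983528.231:1)`). A minimal sketch of the conversion, using only the epoch value taken from the log:

```python
from datetime import datetime, timezone

# Epoch value from the rtc_cmos line in this log.
rtc_epoch = 1745983528
print(datetime.fromtimestamp(rtc_epoch, tz=timezone.utc).isoformat())
```

This kind of cross-check is useful when correlating kernel log timestamps with audit records or journal entries from the same boot.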
Apr 30 03:25:28.975284 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:25:28.975300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:25:28.975317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:25:28.975337 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:25:28.975357 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:25:28.975371 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:25:28.975385 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:25:28.975401 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:25:28.975417 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:25:28.975432 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:25:28.975447 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:25:28.975469 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:25:28.975485 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:25:28.975503 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:25:28.975522 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:25:28.975538 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:25:28.975555 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:25:28.975578 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:25:28.975594 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:25:28.975609 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:25:28.975626 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:25:28.975644 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:25:28.975662 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:25:28.975678 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:25:28.975693 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:25:28.975715 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:25:28.975733 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:25:28.975749 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:25:28.975764 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:25:28.975780 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:25:28.975802 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:25:28.975820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:25:28.975836 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:25:28.975856 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:25:28.975929 systemd-journald[183]: Collecting audit messages is disabled.
Apr 30 03:25:28.975974 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:25:28.975992 systemd-journald[183]: Journal started
Apr 30 03:25:28.976026 systemd-journald[183]: Runtime Journal (/run/log/journal/f09c46bc076b47c49dacabd58c1a3cb5) is 4.9M, max 39.3M, 34.4M free.
Apr 30 03:25:28.972860 systemd-modules-load[184]: Inserted module 'overlay'
Apr 30 03:25:29.001474 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:25:29.001534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:25:29.010286 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:25:29.010376 kernel: Bridge firewalling registered
Apr 30 03:25:29.010469 systemd-modules-load[184]: Inserted module 'br_netfilter'
Apr 30 03:25:29.012780 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:25:29.014219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:25:29.018543 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:25:29.028536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:25:29.040502 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:25:29.049567 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:25:29.050953 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:25:29.052951 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:25:29.062607 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:25:29.064537 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:25:29.068043 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:25:29.079550 dracut-cmdline[215]: dracut-dracut-053
Apr 30 03:25:29.084458 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:25:29.113356 systemd-resolved[220]: Positive Trust Anchors:
Apr 30 03:25:29.113380 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:25:29.113420 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:25:29.116825 systemd-resolved[220]: Defaulting to hostname 'linux'.
Apr 30 03:25:29.118185 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:25:29.118859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:25:29.188283 kernel: SCSI subsystem initialized
Apr 30 03:25:29.200486 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:25:29.214263 kernel: iscsi: registered transport (tcp)
Apr 30 03:25:29.238310 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:25:29.238442 kernel: QLogic iSCSI HBA Driver
Apr 30 03:25:29.306912 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:25:29.313543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:25:29.358301 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:25:29.358401 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:25:29.358425 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:25:29.406278 kernel: raid6: avx2x4 gen() 15559 MB/s
Apr 30 03:25:29.423279 kernel: raid6: avx2x2 gen() 16110 MB/s
Apr 30 03:25:29.440421 kernel: raid6: avx2x1 gen() 12199 MB/s
Apr 30 03:25:29.440529 kernel: raid6: using algorithm avx2x2 gen() 16110 MB/s
Apr 30 03:25:29.458656 kernel: raid6: .... xor() 19080 MB/s, rmw enabled
Apr 30 03:25:29.458740 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 03:25:29.482278 kernel: xor: automatically using best checksumming function avx
Apr 30 03:25:29.675302 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:25:29.692593 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:25:29.704782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:25:29.724148 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Apr 30 03:25:29.731749 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:25:29.739139 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:25:29.763745 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Apr 30 03:25:29.813923 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:25:29.827787 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:25:29.913800 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:25:29.924808 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:25:29.954894 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:25:29.956513 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:25:29.957038 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:25:29.959094 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:25:29.968539 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:25:29.993842 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:25:30.032261 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Apr 30 03:25:30.083423 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:25:30.083456 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Apr 30 03:25:30.083664 kernel: scsi host0: Virtio SCSI HBA
Apr 30 03:25:30.083878 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:25:30.083901 kernel: GPT:9289727 != 125829119
Apr 30 03:25:30.083919 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:25:30.083938 kernel: GPT:9289727 != 125829119
Apr 30 03:25:30.083956 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:25:30.083974 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:25:30.088132 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Apr 30 03:25:30.094366 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Apr 30 03:25:30.094595 kernel: libata version 3.00 loaded.
Apr 30 03:25:30.097696 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:25:30.105980 kernel: ACPI: bus type USB registered
Apr 30 03:25:30.106020 kernel: usbcore: registered new interface driver usbfs
Apr 30 03:25:30.106058 kernel: usbcore: registered new interface driver hub
Apr 30 03:25:30.106075 kernel: usbcore: registered new device driver usb
Apr 30 03:25:30.097897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:25:30.098682 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:25:30.099161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:25:30.101447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:25:30.101999 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:25:30.114990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:25:30.120899 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:25:30.120928 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:25:30.136413 kernel: ata_piix 0000:00:01.1: version 2.13
Apr 30 03:25:30.165693 kernel: scsi host1: ata_piix
Apr 30 03:25:30.165874 kernel: scsi host2: ata_piix
Apr 30 03:25:30.166012 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Apr 30 03:25:30.166037 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Apr 30 03:25:30.211285 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (457)
Apr 30 03:25:30.218078 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 03:25:30.231332 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
Apr 30 03:25:30.235023 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 03:25:30.239508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:25:30.244346 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 03:25:30.244860 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 03:25:30.250602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 03:25:30.255532 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:25:30.257436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:25:30.268184 disk-uuid[530]: Primary Header is updated.
Apr 30 03:25:30.268184 disk-uuid[530]: Secondary Entries is updated.
Apr 30 03:25:30.268184 disk-uuid[530]: Secondary Header is updated.
Apr 30 03:25:30.277311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:25:30.285313 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:25:30.287365 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:25:30.295265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:25:30.377425 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Apr 30 03:25:30.387785 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Apr 30 03:25:30.388050 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Apr 30 03:25:30.388757 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Apr 30 03:25:30.389092 kernel: hub 1-0:1.0: USB hub found
Apr 30 03:25:30.389506 kernel: hub 1-0:1.0: 2 ports detected
Apr 30 03:25:31.295308 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:25:31.296879 disk-uuid[532]: The operation has completed successfully.
Apr 30 03:25:31.352385 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:25:31.352526 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:25:31.364703 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:25:31.385931 sh[561]: Success
Apr 30 03:25:31.401258 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:25:31.508443 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:25:31.511562 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:25:31.513105 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:25:31.537512 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:25:31.537623 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:25:31.538408 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:25:31.540530 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:25:31.540635 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:25:31.551147 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:25:31.552522 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:25:31.560586 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:25:31.564568 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:25:31.580210 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:25:31.580314 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:25:31.580340 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:25:31.585396 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:25:31.600067 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:25:31.601324 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:25:31.610847 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:25:31.619556 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:25:31.734075 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:25:31.743664 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:25:31.778007 ignition[652]: Ignition 2.19.0
Apr 30 03:25:31.778025 ignition[652]: Stage: fetch-offline
Apr 30 03:25:31.778099 ignition[652]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:25:31.778115 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:25:31.779316 ignition[652]: parsed url from cmdline: ""
Apr 30 03:25:31.779322 ignition[652]: no config URL provided
Apr 30 03:25:31.779330 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:25:31.779341 ignition[652]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:25:31.779347 ignition[652]: failed to fetch config: resource requires networking
Apr 30 03:25:31.779572 ignition[652]: Ignition finished successfully
Apr 30 03:25:31.784610 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:25:31.785811 systemd-networkd[745]: lo: Link UP
Apr 30 03:25:31.785827 systemd-networkd[745]: lo: Gained carrier
Apr 30 03:25:31.789195 systemd-networkd[745]: Enumeration completed
Apr 30 03:25:31.790005 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:25:31.790165 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 03:25:31.790170 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Apr 30 03:25:31.791358 systemd[1]: Reached target network.target - Network.
Apr 30 03:25:31.792066 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:25:31.792071 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:25:31.793956 systemd-networkd[745]: eth0: Link UP
Apr 30 03:25:31.793961 systemd-networkd[745]: eth0: Gained carrier
Apr 30 03:25:31.793973 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 03:25:31.799048 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:25:31.799500 systemd-networkd[745]: eth1: Link UP
Apr 30 03:25:31.799505 systemd-networkd[745]: eth1: Gained carrier
Apr 30 03:25:31.799523 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:25:31.815065 systemd-networkd[745]: eth0: DHCPv4 address 24.199.113.144/20, gateway 24.199.112.1 acquired from 169.254.169.253
Apr 30 03:25:31.818425 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.15/20 acquired from 169.254.169.253
Apr 30 03:25:31.836691 ignition[753]: Ignition 2.19.0
Apr 30 03:25:31.837506 ignition[753]: Stage: fetch
Apr 30 03:25:31.838062 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:25:31.838552 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:25:31.839221 ignition[753]: parsed url from cmdline: ""
Apr 30 03:25:31.839244 ignition[753]: no config URL provided
Apr 30 03:25:31.839253 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:25:31.839270 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:25:31.839314 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Apr 30 03:25:31.868548 ignition[753]: GET result: OK
Apr 30 03:25:31.868726 ignition[753]: parsing config with SHA512: 5e4ed7f2863b67596aa5dc22d3b9fa4ebdccb93cd859686b71f19442b70230f8febf76c489657fdd86945ad4e10d3a5648bd9ead0def8e2c188271f76f9c6be4
Apr 30 03:25:31.876146 unknown[753]: fetched base config from "system"
Apr 30 03:25:31.876162 unknown[753]: fetched base config from "system"
Apr 30 03:25:31.876886 ignition[753]: fetch: fetch complete
Apr 30 03:25:31.876173 unknown[753]: fetched user config from "digitalocean"
Apr 30 03:25:31.876896 ignition[753]: fetch: fetch passed
Apr 30 03:25:31.876988 ignition[753]: Ignition finished successfully
Apr 30 03:25:31.880103 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:25:31.886593 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:25:31.928984 ignition[760]: Ignition 2.19.0
Apr 30 03:25:31.929001 ignition[760]: Stage: kargs
Apr 30 03:25:31.929359 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:25:31.929376 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:25:31.930866 ignition[760]: kargs: kargs passed
Apr 30 03:25:31.930936 ignition[760]: Ignition finished successfully
Apr 30 03:25:31.932817 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:25:31.937589 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:25:31.967840 ignition[767]: Ignition 2.19.0
Apr 30 03:25:31.967860 ignition[767]: Stage: disks
Apr 30 03:25:31.968111 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:25:31.968129 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:25:31.970939 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:25:31.969527 ignition[767]: disks: disks passed
Apr 30 03:25:31.969595 ignition[767]: Ignition finished successfully
Apr 30 03:25:31.975955 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:25:31.976663 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:25:31.977259 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:25:31.977988 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:25:31.978739 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:25:31.984643 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:25:32.015419 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 03:25:32.022802 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:25:32.029621 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:25:32.170265 kernel: EXT4-fs (vda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:25:32.171619 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:25:32.173674 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:25:32.179501 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:25:32.189743 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:25:32.195146 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Apr 30 03:25:32.198715 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:25:32.199675 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:25:32.202796 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784)
Apr 30 03:25:32.199723 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:25:32.205970 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:25:32.208565 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:25:32.208622 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:25:32.208636 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:25:32.215283 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:25:32.218250 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:25:32.230027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:25:32.306530 coreos-metadata[786]: Apr 30 03:25:32.306 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 03:25:32.322440 coreos-metadata[786]: Apr 30 03:25:32.322 INFO Fetch successful
Apr 30 03:25:32.323131 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:25:32.331110 coreos-metadata[787]: Apr 30 03:25:32.331 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 03:25:32.332740 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Apr 30 03:25:32.334615 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Apr 30 03:25:32.340919 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:25:32.346652 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:25:32.350372 coreos-metadata[787]: Apr 30 03:25:32.350 INFO Fetch successful
Apr 30 03:25:32.355300 coreos-metadata[787]: Apr 30 03:25:32.355 INFO wrote hostname ci-4081.3.3-2-e7e0406ed5 to /sysroot/etc/hostname
Apr 30 03:25:32.356491 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:25:32.357720 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:25:32.474895 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:25:32.480460 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:25:32.483386 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:25:32.494300 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:25:32.518893 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:25:32.532520 ignition[905]: INFO : Ignition 2.19.0
Apr 30 03:25:32.532520 ignition[905]: INFO : Stage: mount
Apr 30 03:25:32.533653 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:25:32.533653 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:25:32.534650 ignition[905]: INFO : mount: mount passed
Apr 30 03:25:32.534650 ignition[905]: INFO : Ignition finished successfully
Apr 30 03:25:32.537190 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:25:32.537784 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:25:32.544493 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:25:32.568659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:25:32.588480 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918)
Apr 30 03:25:32.588557 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:25:32.591069 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:25:32.591153 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:25:32.595271 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:25:32.597976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:25:32.642009 ignition[935]: INFO : Ignition 2.19.0
Apr 30 03:25:32.642962 ignition[935]: INFO : Stage: files
Apr 30 03:25:32.643389 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:25:32.643389 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:25:32.645366 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:25:32.647680 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:25:32.647680 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:25:32.653669 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:25:32.654949 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:25:32.655904 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:25:32.655754 unknown[935]: wrote ssh authorized keys file for user: core
Apr 30 03:25:32.658119 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 03:25:32.659065 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Apr 30 03:25:32.698016 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:25:32.852644 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 03:25:32.852644 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:25:32.854748 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Apr 30 03:25:32.969795 systemd-networkd[745]: eth1: Gained IPv6LL
Apr 30 03:25:33.412768 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 30 03:25:33.727256 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:25:33.727256 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 30 03:25:33.729340 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:25:33.729340 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:25:33.729340 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 30 03:25:33.729340 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:25:33.729340 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:25:33.729340 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:25:33.729340 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:25:33.729340 ignition[935]: INFO : files: files passed
Apr 30 03:25:33.734661 ignition[935]: INFO : Ignition finished successfully
Apr 30 03:25:33.731770 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:25:33.738552 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:25:33.743409 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:25:33.747673 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:25:33.747828 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:25:33.771393 initrd-setup-root-after-ignition[963]: grep:
Apr 30 03:25:33.771393 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:25:33.773745 initrd-setup-root-after-ignition[963]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:25:33.773745 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:25:33.775949 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:25:33.776877 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:25:33.782473 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:25:33.829477 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:25:33.829657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:25:33.831254 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:25:33.831741 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:25:33.832952 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:25:33.839559 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:25:33.862260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:25:33.867645 systemd-networkd[745]: eth0: Gained IPv6LL
Apr 30 03:25:33.869101 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:25:33.893775 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:25:33.893899 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:25:33.896322 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:25:33.896782 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:25:33.897764 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:25:33.898602 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:25:33.898681 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:25:33.899738 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:25:33.900121 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:25:33.901048 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:25:33.901815 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:25:33.902541 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:25:33.903482 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:25:33.904552 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:25:33.905443 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:25:33.906258 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:25:33.907102 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:25:33.907893 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:25:33.907973 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:25:33.909022 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:25:33.909550 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:25:33.910317 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:25:33.910694 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:25:33.911327 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:25:33.911423 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:25:33.912783 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:25:33.912844 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:25:33.913380 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:25:33.913428 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:25:33.914097 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:25:33.914143 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:25:33.921497 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:25:33.923380 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:25:33.923757 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:25:33.923822 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:25:33.925377 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:25:33.925438 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:25:33.949679 ignition[988]: INFO : Ignition 2.19.0
Apr 30 03:25:33.950683 ignition[988]: INFO : Stage: umount
Apr 30 03:25:33.952468 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:25:33.952468 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:25:33.956103 ignition[988]: INFO : umount: umount passed
Apr 30 03:25:33.958033 ignition[988]: INFO : Ignition finished successfully
Apr 30 03:25:33.958173 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:25:33.960622 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:25:33.961338 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:25:33.963037 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:25:33.963182 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:25:33.967463 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:25:33.967568 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:25:33.968561 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:25:33.968641 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:25:33.969358 systemd[1]: Stopped target network.target - Network.
Apr 30 03:25:33.969889 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:25:33.969958 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:25:33.970688 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:25:33.971307 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:25:33.973597 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:25:33.974129 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:25:33.974951 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:25:33.975612 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:25:33.975667 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:25:33.976295 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:25:33.976339 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:25:33.976903 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:25:33.976958 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:25:33.977624 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:25:33.977670 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:25:33.978809 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:25:33.979660 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:25:33.981364 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:25:33.981480 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:25:33.982359 systemd-networkd[745]: eth0: DHCPv6 lease lost
Apr 30 03:25:33.983611 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:25:33.983717 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:25:33.986411 systemd-networkd[745]: eth1: DHCPv6 lease lost
Apr 30 03:25:33.987336 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:25:33.987460 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:25:33.990574 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:25:33.990746 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:25:33.993194 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:25:33.993332 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:25:33.997423 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:25:33.997877 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:25:33.997958 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:25:33.998411 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:25:33.998452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:25:33.999191 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:25:33.999283 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:25:34.002598 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:25:34.002665 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:25:34.003597 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:25:34.020752 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:25:34.020921 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:25:34.024719 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:25:34.025350 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:25:34.026419 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:25:34.026502 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:25:34.027583 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:25:34.027667 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:25:34.029260 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:25:34.029345 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:25:34.029930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:25:34.029999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:25:34.038583 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:25:34.039159 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:25:34.039267 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:25:34.039738 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 03:25:34.039796 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:25:34.042460 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:25:34.042548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:25:34.043178 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:25:34.043251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:25:34.043992 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:25:34.046357 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:25:34.049739 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:25:34.049864 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:25:34.051703 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:25:34.057578 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:25:34.081154 systemd[1]: Switching root.
Apr 30 03:25:34.116040 systemd-journald[183]: Journal stopped
Apr 30 03:25:35.499607 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:25:35.499954 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:25:35.499993 kernel: SELinux: policy capability open_perms=1
Apr 30 03:25:35.500013 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:25:35.500033 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:25:35.500811 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:25:35.500851 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:25:35.500872 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:25:35.500891 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:25:35.500923 kernel: audit: type=1403 audit(1745983534.299:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:25:35.500957 systemd[1]: Successfully loaded SELinux policy in 40.555ms.
Apr 30 03:25:35.500988 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.172ms.
Apr 30 03:25:35.503321 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:25:35.503354 systemd[1]: Detected virtualization kvm.
Apr 30 03:25:35.503376 systemd[1]: Detected architecture x86-64.
Apr 30 03:25:35.503425 systemd[1]: Detected first boot.
Apr 30 03:25:35.503449 systemd[1]: Hostname set to .
Apr 30 03:25:35.503467 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:25:35.503487 zram_generator::config[1030]: No configuration found.
Apr 30 03:25:35.503520 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:25:35.503543 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:25:35.503563 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:25:35.503583 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:25:35.503603 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:25:35.503623 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:25:35.503642 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:25:35.503662 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:25:35.503692 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:25:35.503714 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:25:35.503736 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:25:35.503763 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:25:35.503784 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:25:35.503807 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:25:35.503829 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:25:35.503850 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:25:35.503876 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:25:35.503896 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:25:35.503916 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:25:35.503939 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:25:35.503961 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:25:35.504010 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:25:35.504038 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:25:35.504059 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:25:35.504083 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:25:35.504103 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:25:35.504121 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:25:35.504142 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:25:35.504217 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:25:35.506313 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:25:35.506352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:25:35.506376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:25:35.506408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:25:35.506429 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:25:35.506450 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:25:35.506471 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:25:35.506493 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:25:35.506512 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:35.506532 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:25:35.506550 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:25:35.506575 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:25:35.506595 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:25:35.506616 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:25:35.506647 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:25:35.506670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:25:35.506692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:25:35.506714 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:25:35.506736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:25:35.506758 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:25:35.506812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:25:35.506834 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:25:35.506855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:25:35.506876 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:25:35.506896 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:25:35.506920 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:25:35.506942 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:25:35.506964 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:25:35.506991 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:25:35.507014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:25:35.507034 kernel: loop: module loaded
Apr 30 03:25:35.507055 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:25:35.507080 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:25:35.507100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:25:35.507122 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:25:35.507142 systemd[1]: Stopped verity-setup.service.
Apr 30 03:25:35.507164 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:35.507191 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:25:35.507213 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:25:35.509325 kernel: fuse: init (API version 7.39)
Apr 30 03:25:35.509366 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:25:35.509391 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:25:35.509424 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:25:35.509445 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:25:35.509464 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:25:35.509489 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:25:35.509511 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:25:35.509537 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:25:35.509565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:25:35.509588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:25:35.509609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:25:35.509630 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:25:35.509653 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:25:35.509674 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:25:35.509696 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:25:35.509719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:25:35.509746 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:25:35.509767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:25:35.509787 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:25:35.509806 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:25:35.509826 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:25:35.509848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:25:35.509869 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:25:35.509891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:25:35.509913 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:25:35.509939 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:25:35.509959 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:25:35.509980 kernel: ACPI: bus type drm_connector registered
Apr 30 03:25:35.510002 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:25:35.510023 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:25:35.510051 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:25:35.510070 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:25:35.510089 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:25:35.510111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:25:35.510191 systemd-journald[1103]: Collecting audit messages is disabled.
Apr 30 03:25:35.512358 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:25:35.512408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:25:35.512432 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:25:35.512454 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:25:35.512475 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:25:35.512496 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:25:35.512529 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:25:35.512556 systemd-journald[1103]: Journal started
Apr 30 03:25:35.512612 systemd-journald[1103]: Runtime Journal (/run/log/journal/f09c46bc076b47c49dacabd58c1a3cb5) is 4.9M, max 39.3M, 34.4M free.
Apr 30 03:25:34.990972 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:25:35.018030 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 03:25:35.018695 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:25:35.518958 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:25:35.539015 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:25:35.558442 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:25:35.605694 kernel: loop0: detected capacity change from 0 to 140768
Apr 30 03:25:35.600621 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:25:35.605035 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:25:35.608376 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:25:35.610558 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:25:35.621624 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:25:35.666019 systemd-journald[1103]: Time spent on flushing to /var/log/journal/f09c46bc076b47c49dacabd58c1a3cb5 is 66.697ms for 997 entries.
Apr 30 03:25:35.666019 systemd-journald[1103]: System Journal (/var/log/journal/f09c46bc076b47c49dacabd58c1a3cb5) is 8.0M, max 195.6M, 187.6M free.
Apr 30 03:25:35.756722 systemd-journald[1103]: Received client request to flush runtime journal.
Apr 30 03:25:35.756833 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:25:35.756873 kernel: loop1: detected capacity change from 0 to 8
Apr 30 03:25:35.756908 kernel: loop2: detected capacity change from 0 to 142488
Apr 30 03:25:35.685284 systemd-tmpfiles[1128]: ACLs are not supported, ignoring.
Apr 30 03:25:35.685300 systemd-tmpfiles[1128]: ACLs are not supported, ignoring.
Apr 30 03:25:35.703309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:25:35.716338 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:25:35.719116 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 03:25:35.723889 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:25:35.724678 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:25:35.764344 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:25:35.819705 kernel: loop3: detected capacity change from 0 to 218376
Apr 30 03:25:35.859616 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:25:35.871903 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:25:35.887211 kernel: loop4: detected capacity change from 0 to 140768
Apr 30 03:25:35.934541 kernel: loop5: detected capacity change from 0 to 8
Apr 30 03:25:35.939267 kernel: loop6: detected capacity change from 0 to 142488
Apr 30 03:25:35.956973 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Apr 30 03:25:35.957983 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Apr 30 03:25:35.973592 kernel: loop7: detected capacity change from 0 to 218376
Apr 30 03:25:35.987528 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:25:35.994416 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Apr 30 03:25:35.995357 (sd-merge)[1177]: Merged extensions into '/usr'.
Apr 30 03:25:36.008861 systemd[1]: Reloading requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:25:36.009504 systemd[1]: Reloading...
Apr 30 03:25:36.211524 zram_generator::config[1205]: No configuration found.
Apr 30 03:25:36.321818 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:25:36.399676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:25:36.452271 systemd[1]: Reloading finished in 442 ms.
Apr 30 03:25:36.477745 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:25:36.479248 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:25:36.491708 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:25:36.497481 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:25:36.512743 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:25:36.512774 systemd[1]: Reloading...
Apr 30 03:25:36.543673 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:25:36.544936 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:25:36.546520 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:25:36.546791 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Apr 30 03:25:36.546852 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Apr 30 03:25:36.550906 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:25:36.551435 systemd-tmpfiles[1249]: Skipping /boot
Apr 30 03:25:36.573604 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:25:36.574391 systemd-tmpfiles[1249]: Skipping /boot
Apr 30 03:25:36.660322 zram_generator::config[1285]: No configuration found.
Apr 30 03:25:36.821998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:25:36.882135 systemd[1]: Reloading finished in 368 ms.
Apr 30 03:25:36.897340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:25:36.908807 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:25:36.924116 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 03:25:36.933650 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 03:25:36.946572 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:25:36.951395 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 03:25:36.953354 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:25:36.980616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:25:36.990557 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:25:37.037864 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:37.038142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:25:37.046811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:25:37.055785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:25:37.065765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:25:37.066501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:25:37.067254 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:37.070397 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 03:25:37.073497 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 03:25:37.094612 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 03:25:37.096546 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:37.096755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:25:37.096951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:25:37.097066 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:25:37.097135 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:37.105643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:25:37.105907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:25:37.112029 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:37.112475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:25:37.129062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:25:37.131634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:25:37.131966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:25:37.132158 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:25:37.132307 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:37.135361 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 03:25:37.136837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:25:37.137036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:25:37.148772 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 03:25:37.150008 systemd[1]: Finished ensure-sysext.service.
Apr 30 03:25:37.154859 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:25:37.155133 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:25:37.156388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:25:37.156621 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:25:37.161805 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:25:37.165199 augenrules[1359]: No rules
Apr 30 03:25:37.173721 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 03:25:37.174858 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:25:37.181221 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Apr 30 03:25:37.201333 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 03:25:37.231610 systemd-resolved[1323]: Positive Trust Anchors:
Apr 30 03:25:37.232190 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:25:37.232271 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:25:37.240163 systemd-resolved[1323]: Using system hostname 'ci-4081.3.3-2-e7e0406ed5'.
Apr 30 03:25:37.240452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:25:37.252715 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:25:37.253480 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:25:37.256979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:25:37.325473 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 03:25:37.327293 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 03:25:37.385117 systemd-networkd[1372]: lo: Link UP
Apr 30 03:25:37.385410 systemd-networkd[1372]: lo: Gained carrier
Apr 30 03:25:37.388381 systemd-networkd[1372]: Enumeration completed
Apr 30 03:25:37.388569 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:25:37.389312 systemd[1]: Reached target network.target - Network.
Apr 30 03:25:37.399594 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 03:25:37.463267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373)
Apr 30 03:25:37.479516 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Apr 30 03:25:37.480155 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:37.480427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:25:37.490540 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:25:37.499280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:25:37.502727 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:25:37.503850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:25:37.504058 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:25:37.504090 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:25:37.504676 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 03:25:37.518132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:25:37.518493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:25:37.529272 kernel: ISO 9660 Extensions: RRIP_1991A
Apr 30 03:25:37.555634 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Apr 30 03:25:37.557858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:25:37.558062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:25:37.563591 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:25:37.564321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:25:37.581083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:25:37.581186 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:25:37.597891 systemd-networkd[1372]: eth1: Configuring with /run/systemd/network/10-82:05:95:ce:e8:8c.network.
Apr 30 03:25:37.600402 systemd-networkd[1372]: eth1: Link UP
Apr 30 03:25:37.600546 systemd-networkd[1372]: eth1: Gained carrier
Apr 30 03:25:37.605990 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 30 03:25:37.642787 systemd-networkd[1372]: eth0: Configuring with /run/systemd/network/10-8e:0d:b3:c0:35:97.network.
Apr 30 03:25:37.642986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 03:25:37.644062 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 30 03:25:37.645220 systemd-networkd[1372]: eth0: Link UP
Apr 30 03:25:37.645273 systemd-networkd[1372]: eth0: Gained carrier
Apr 30 03:25:37.650146 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 30 03:25:37.656285 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 30 03:25:37.656259 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 03:25:37.660408 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Apr 30 03:25:37.660979 kernel: ACPI: button: Power Button [PWRF]
Apr 30 03:25:37.681289 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 30 03:25:37.703423 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 03:25:37.780806 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:25:37.828276 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 03:25:37.870079 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Apr 30 03:25:37.870208 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Apr 30 03:25:37.870799 kernel: Console: switching to colour dummy device 80x25
Apr 30 03:25:37.873464 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 03:25:37.873621 kernel: [drm] features: -context_init
Apr 30 03:25:37.873651 kernel: [drm] number of scanouts: 1
Apr 30 03:25:37.873669 kernel: [drm] number of cap sets: 0
Apr 30 03:25:37.881271 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Apr 30 03:25:37.906256 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 30 03:25:37.906367 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 03:25:37.906169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:25:37.912072 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 03:25:37.930926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:25:37.931127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:25:37.932900 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:25:37.974848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:25:38.039120 kernel: EDAC MC: Ver: 3.0.0
Apr 30 03:25:38.068384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:25:38.085268 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 03:25:38.092588 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 03:25:38.124276 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:25:38.160685 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 03:25:38.162422 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:25:38.162568 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:25:38.162788 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 03:25:38.162934 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 03:25:38.163381 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 03:25:38.163580 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 03:25:38.163660 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 03:25:38.163723 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 03:25:38.163757 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:25:38.163819 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:25:38.165903 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 03:25:38.168397 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 03:25:38.175652 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 03:25:38.180107 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 03:25:38.183713 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 03:25:38.185991 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:25:38.188823 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:25:38.189439 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:25:38.189471 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:25:38.195462 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 03:25:38.201206 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:25:38.209667 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 03:25:38.225418 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 03:25:38.229530 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 03:25:38.245508 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 03:25:38.247420 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 03:25:38.257494 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 03:25:38.264527 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 03:25:38.277727 jq[1438]: false
Apr 30 03:25:38.281130 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 03:25:38.293571 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 03:25:38.297276 coreos-metadata[1436]: Apr 30 03:25:38.296 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 03:25:38.314372 coreos-metadata[1436]: Apr 30 03:25:38.313 INFO Fetch successful
Apr 30 03:25:38.314747 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 03:25:38.319716 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 03:25:38.320722 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 03:25:38.327885 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 03:25:38.340491 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 03:25:38.344758 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 03:25:38.359845 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 03:25:38.360367 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 03:25:38.369026 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 03:25:38.369728 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 03:25:38.389125 dbus-daemon[1437]: [system] SELinux support is enabled
Apr 30 03:25:38.393026 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 03:25:38.403849 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 03:25:38.403911 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 03:25:38.404930 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 03:25:38.405057 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Apr 30 03:25:38.405086 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 03:25:38.437948 extend-filesystems[1441]: Found loop4
Apr 30 03:25:38.440169 extend-filesystems[1441]: Found loop5
Apr 30 03:25:38.440169 extend-filesystems[1441]: Found loop6
Apr 30 03:25:38.440169 extend-filesystems[1441]: Found loop7
Apr 30 03:25:38.440169 extend-filesystems[1441]: Found vda
Apr 30 03:25:38.440169 extend-filesystems[1441]: Found vda1
Apr 30 03:25:38.440169 extend-filesystems[1441]: Found vda2
Apr 30 03:25:38.494278 extend-filesystems[1441]: Found vda3
Apr 30 03:25:38.494278 extend-filesystems[1441]: Found usr
Apr 30 03:25:38.494278 extend-filesystems[1441]: Found vda4
Apr 30 03:25:38.494278 extend-filesystems[1441]: Found vda6
Apr 30 03:25:38.494278 extend-filesystems[1441]: Found vda7
Apr 30 03:25:38.494278 extend-filesystems[1441]: Found vda9
Apr 30 03:25:38.494278 extend-filesystems[1441]: Checking size of /dev/vda9
Apr 30 03:25:38.572432 jq[1451]: true
Apr 30 03:25:38.573119 update_engine[1447]: I20250430 03:25:38.464248 1447 main.cc:92] Flatcar Update Engine starting
Apr 30 03:25:38.573119 update_engine[1447]: I20250430 03:25:38.466675 1447 update_check_scheduler.cc:74] Next update check in 7m40s
Apr 30 03:25:38.582002 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Apr 30 03:25:38.469114 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 03:25:38.582344 tar[1456]: linux-amd64/LICENSE
Apr 30 03:25:38.582344 tar[1456]: linux-amd64/helm
Apr 30 03:25:38.582634 extend-filesystems[1441]: Resized partition /dev/vda9
Apr 30 03:25:38.482676 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 03:25:38.595551 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024)
Apr 30 03:25:38.619863 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1382)
Apr 30 03:25:38.484341 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 03:25:38.629695 jq[1472]: true
Apr 30 03:25:38.500558 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 03:25:38.550985 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 03:25:38.581325 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 03:25:38.587932 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 03:25:38.588920 systemd-logind[1446]: New seat seat0.
Apr 30 03:25:38.620478 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 30 03:25:38.620511 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 03:25:38.620906 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 03:25:38.772392 bash[1500]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 03:25:38.762797 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 03:25:38.785754 systemd[1]: Starting sshkeys.service...
Apr 30 03:25:38.855487 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 03:25:38.896904 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Apr 30 03:25:38.897817 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 03:25:38.905930 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 03:25:38.916929 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 30 03:25:38.916929 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 8
Apr 30 03:25:38.916929 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Apr 30 03:25:38.920917 extend-filesystems[1441]: Resized filesystem in /dev/vda9
Apr 30 03:25:38.920917 extend-filesystems[1441]: Found vdb
Apr 30 03:25:38.921910 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 03:25:38.922310 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 03:25:38.987859 systemd-networkd[1372]: eth0: Gained IPv6LL
Apr 30 03:25:38.988610 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 30 03:25:38.999753 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 03:25:39.003729 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 03:25:39.021812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:25:39.036069 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 03:25:39.062441 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 03:25:39.082820 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 03:25:39.122499 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 03:25:39.126645 coreos-metadata[1512]: Apr 30 03:25:39.126 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 03:25:39.137973 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 03:25:39.141091 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 03:25:39.145904 coreos-metadata[1512]: Apr 30 03:25:39.140 INFO Fetch successful
Apr 30 03:25:39.153813 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 03:25:39.163687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 03:25:39.181448 unknown[1512]: wrote ssh authorized keys file for user: core
Apr 30 03:25:39.233588 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 03:25:39.242663 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 03:25:39.244575 systemd-networkd[1372]: eth1: Gained IPv6LL
Apr 30 03:25:39.245611 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 30 03:25:39.252330 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 03:25:39.269997 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 03:25:39.275837 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 03:25:39.280780 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 03:25:39.288786 systemd[1]: Finished sshkeys.service.
Apr 30 03:25:39.307995 containerd[1469]: time="2025-04-30T03:25:39.305693105Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 03:25:39.360751 containerd[1469]: time="2025-04-30T03:25:39.360111073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:25:39.368746 containerd[1469]: time="2025-04-30T03:25:39.368686031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:25:39.369278 containerd[1469]: time="2025-04-30T03:25:39.368980728Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 03:25:39.369278 containerd[1469]: time="2025-04-30T03:25:39.369014205Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 03:25:39.369278 containerd[1469]: time="2025-04-30T03:25:39.369195187Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 03:25:39.369661 containerd[1469]: time="2025-04-30T03:25:39.369210508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 03:25:39.369661 containerd[1469]: time="2025-04-30T03:25:39.369524321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:25:39.369661 containerd[1469]: time="2025-04-30T03:25:39.369539695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:25:39.370498 containerd[1469]: time="2025-04-30T03:25:39.369967940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:25:39.370498 containerd[1469]: time="2025-04-30T03:25:39.369994052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 03:25:39.370498 containerd[1469]: time="2025-04-30T03:25:39.370008000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:25:39.370498 containerd[1469]: time="2025-04-30T03:25:39.370017131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 03:25:39.370498 containerd[1469]: time="2025-04-30T03:25:39.370117207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:25:39.370498 containerd[1469]: time="2025-04-30T03:25:39.370458800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:25:39.370949 containerd[1469]: time="2025-04-30T03:25:39.370850297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:25:39.371049 containerd[1469]: time="2025-04-30T03:25:39.371029346Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 03:25:39.371810 containerd[1469]: time="2025-04-30T03:25:39.371420366Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 03:25:39.371810 containerd[1469]: time="2025-04-30T03:25:39.371522819Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 03:25:39.376053 containerd[1469]: time="2025-04-30T03:25:39.375996509Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 03:25:39.376584 containerd[1469]: time="2025-04-30T03:25:39.376459796Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 03:25:39.377474 containerd[1469]: time="2025-04-30T03:25:39.377306158Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 03:25:39.377474 containerd[1469]: time="2025-04-30T03:25:39.377352083Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 03:25:39.377474 containerd[1469]: time="2025-04-30T03:25:39.377390726Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 03:25:39.377646 containerd[1469]: time="2025-04-30T03:25:39.377634912Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 03:25:39.378601 containerd[1469]: time="2025-04-30T03:25:39.378442470Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 03:25:39.378689 containerd[1469]: time="2025-04-30T03:25:39.378670229Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 03:25:39.378725 containerd[1469]: time="2025-04-30T03:25:39.378691353Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 03:25:39.378725 containerd[1469]: time="2025-04-30T03:25:39.378705350Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 03:25:39.378725 containerd[1469]: time="2025-04-30T03:25:39.378722480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 03:25:39.378844 containerd[1469]: time="2025-04-30T03:25:39.378736953Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 03:25:39.378844 containerd[1469]: time="2025-04-30T03:25:39.378749778Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 03:25:39.378844 containerd[1469]: time="2025-04-30T03:25:39.378764230Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 03:25:39.378844 containerd[1469]: time="2025-04-30T03:25:39.378778013Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 03:25:39.378844 containerd[1469]: time="2025-04-30T03:25:39.378790553Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 03:25:39.378844 containerd[1469]: time="2025-04-30T03:25:39.378803774Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 03:25:39.378844 containerd[1469]: time="2025-04-30T03:25:39.378815841Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.378853390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.378871369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.378882795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.378897541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.378925301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.378939715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.378950794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.378963404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.378985255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.379003179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.379015179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.379027154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.379039680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..."
type=io.containerd.grpc.v1 Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.379055806Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:25:39.379522 containerd[1469]: time="2025-04-30T03:25:39.379077451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:25:39.379911 containerd[1469]: time="2025-04-30T03:25:39.379090446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:25:39.379911 containerd[1469]: time="2025-04-30T03:25:39.379101288Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:25:39.380475 containerd[1469]: time="2025-04-30T03:25:39.380012659Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:25:39.380475 containerd[1469]: time="2025-04-30T03:25:39.380041390Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:25:39.380475 containerd[1469]: time="2025-04-30T03:25:39.380136350Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:25:39.380475 containerd[1469]: time="2025-04-30T03:25:39.380155474Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:25:39.380475 containerd[1469]: time="2025-04-30T03:25:39.380173438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:25:39.380475 containerd[1469]: time="2025-04-30T03:25:39.380191096Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 30 03:25:39.380475 containerd[1469]: time="2025-04-30T03:25:39.380212064Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:25:39.380475 containerd[1469]: time="2025-04-30T03:25:39.380233866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:25:39.380712 containerd[1469]: time="2025-04-30T03:25:39.380561503Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:25:39.380712 containerd[1469]: time="2025-04-30T03:25:39.380632340Z" level=info msg="Connect containerd service" Apr 30 03:25:39.380712 containerd[1469]: time="2025-04-30T03:25:39.380676557Z" level=info msg="using legacy CRI server" Apr 30 03:25:39.380712 containerd[1469]: time="2025-04-30T03:25:39.380684011Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:25:39.381221 containerd[1469]: time="2025-04-30T03:25:39.380796562Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:25:39.382784 containerd[1469]: time="2025-04-30T03:25:39.382750307Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:25:39.382933 containerd[1469]: time="2025-04-30T03:25:39.382895198Z" level=info msg="Start subscribing containerd event" Apr 30 
03:25:39.382964 containerd[1469]: time="2025-04-30T03:25:39.382950681Z" level=info msg="Start recovering state" Apr 30 03:25:39.383055 containerd[1469]: time="2025-04-30T03:25:39.383021556Z" level=info msg="Start event monitor" Apr 30 03:25:39.383055 containerd[1469]: time="2025-04-30T03:25:39.383051818Z" level=info msg="Start snapshots syncer" Apr 30 03:25:39.383129 containerd[1469]: time="2025-04-30T03:25:39.383060943Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:25:39.383129 containerd[1469]: time="2025-04-30T03:25:39.383068230Z" level=info msg="Start streaming server" Apr 30 03:25:39.384571 containerd[1469]: time="2025-04-30T03:25:39.384531038Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:25:39.387604 containerd[1469]: time="2025-04-30T03:25:39.384595516Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:25:39.387604 containerd[1469]: time="2025-04-30T03:25:39.384658675Z" level=info msg="containerd successfully booted in 0.080444s" Apr 30 03:25:39.384825 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:25:39.785285 tar[1456]: linux-amd64/README.md Apr 30 03:25:39.804429 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:25:40.474907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:25:40.483902 (kubelet)[1560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:25:40.484810 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:25:40.489688 systemd[1]: Startup finished in 1.154s (kernel) + 5.586s (initrd) + 6.230s (userspace) = 12.970s. 
Apr 30 03:25:41.210947 kubelet[1560]: E0430 03:25:41.210860 1560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:25:41.214519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:25:41.214695 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:25:41.215124 systemd[1]: kubelet.service: Consumed 1.398s CPU time.
Apr 30 03:25:42.476700 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 03:25:42.489799 systemd[1]: Started sshd@0-24.199.113.144:22-139.178.89.65:34592.service - OpenSSH per-connection server daemon (139.178.89.65:34592).
Apr 30 03:25:42.560827 sshd[1572]: Accepted publickey for core from 139.178.89.65 port 34592 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:42.563508 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:42.577823 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 03:25:42.591157 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 03:25:42.596682 systemd-logind[1446]: New session 1 of user core.
Apr 30 03:25:42.611918 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 03:25:42.621784 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 03:25:42.629739 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 03:25:42.752970 systemd[1576]: Queued start job for default target default.target.
Apr 30 03:25:42.760270 systemd[1576]: Created slice app.slice - User Application Slice.
Apr 30 03:25:42.760311 systemd[1576]: Reached target paths.target - Paths.
Apr 30 03:25:42.760328 systemd[1576]: Reached target timers.target - Timers.
Apr 30 03:25:42.762067 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 03:25:42.787848 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 03:25:42.788160 systemd[1576]: Reached target sockets.target - Sockets.
Apr 30 03:25:42.788199 systemd[1576]: Reached target basic.target - Basic System.
Apr 30 03:25:42.788303 systemd[1576]: Reached target default.target - Main User Target.
Apr 30 03:25:42.788368 systemd[1576]: Startup finished in 147ms.
Apr 30 03:25:42.788674 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 03:25:42.802576 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 03:25:42.880015 systemd[1]: Started sshd@1-24.199.113.144:22-139.178.89.65:34596.service - OpenSSH per-connection server daemon (139.178.89.65:34596).
Apr 30 03:25:42.923646 sshd[1587]: Accepted publickey for core from 139.178.89.65 port 34596 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:42.926325 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:42.935077 systemd-logind[1446]: New session 2 of user core.
Apr 30 03:25:42.943624 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 03:25:43.012952 sshd[1587]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:43.024970 systemd[1]: sshd@1-24.199.113.144:22-139.178.89.65:34596.service: Deactivated successfully.
Apr 30 03:25:43.027448 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 03:25:43.029619 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit.
Apr 30 03:25:43.035752 systemd[1]: Started sshd@2-24.199.113.144:22-139.178.89.65:34610.service - OpenSSH per-connection server daemon (139.178.89.65:34610).
Apr 30 03:25:43.038003 systemd-logind[1446]: Removed session 2.
Apr 30 03:25:43.090337 sshd[1594]: Accepted publickey for core from 139.178.89.65 port 34610 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:43.092377 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:43.098648 systemd-logind[1446]: New session 3 of user core.
Apr 30 03:25:43.110679 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 03:25:43.169437 sshd[1594]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:43.183787 systemd[1]: sshd@2-24.199.113.144:22-139.178.89.65:34610.service: Deactivated successfully.
Apr 30 03:25:43.186211 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 03:25:43.188461 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit.
Apr 30 03:25:43.192960 systemd[1]: Started sshd@3-24.199.113.144:22-139.178.89.65:34612.service - OpenSSH per-connection server daemon (139.178.89.65:34612).
Apr 30 03:25:43.195391 systemd-logind[1446]: Removed session 3.
Apr 30 03:25:43.246784 sshd[1601]: Accepted publickey for core from 139.178.89.65 port 34612 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:43.249054 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:43.256964 systemd-logind[1446]: New session 4 of user core.
Apr 30 03:25:43.261709 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 03:25:43.327039 sshd[1601]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:43.339549 systemd[1]: sshd@3-24.199.113.144:22-139.178.89.65:34612.service: Deactivated successfully.
Apr 30 03:25:43.342323 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 03:25:43.345456 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit.
Apr 30 03:25:43.351717 systemd[1]: Started sshd@4-24.199.113.144:22-139.178.89.65:34614.service - OpenSSH per-connection server daemon (139.178.89.65:34614).
Apr 30 03:25:43.353805 systemd-logind[1446]: Removed session 4.
Apr 30 03:25:43.391272 sshd[1608]: Accepted publickey for core from 139.178.89.65 port 34614 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:43.393360 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:43.399850 systemd-logind[1446]: New session 5 of user core.
Apr 30 03:25:43.412620 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 03:25:43.483308 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 03:25:43.483645 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:25:43.501088 sudo[1611]: pam_unix(sudo:session): session closed for user root
Apr 30 03:25:43.505143 sshd[1608]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:43.519179 systemd[1]: sshd@4-24.199.113.144:22-139.178.89.65:34614.service: Deactivated successfully.
Apr 30 03:25:43.522704 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 03:25:43.525699 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit.
Apr 30 03:25:43.530706 systemd[1]: Started sshd@5-24.199.113.144:22-139.178.89.65:34626.service - OpenSSH per-connection server daemon (139.178.89.65:34626).
Apr 30 03:25:43.532962 systemd-logind[1446]: Removed session 5.
Apr 30 03:25:43.586623 sshd[1616]: Accepted publickey for core from 139.178.89.65 port 34626 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:43.588764 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:43.595322 systemd-logind[1446]: New session 6 of user core.
Apr 30 03:25:43.602624 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 03:25:43.666410 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 03:25:43.666859 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:25:43.673204 sudo[1620]: pam_unix(sudo:session): session closed for user root
Apr 30 03:25:43.680940 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 03:25:43.681282 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:25:43.698834 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 03:25:43.703451 auditctl[1623]: No rules
Apr 30 03:25:43.703903 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 03:25:43.704343 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 03:25:43.713914 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:25:43.750523 augenrules[1641]: No rules
Apr 30 03:25:43.752067 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:25:43.753659 sudo[1619]: pam_unix(sudo:session): session closed for user root
Apr 30 03:25:43.758165 sshd[1616]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:43.768833 systemd[1]: sshd@5-24.199.113.144:22-139.178.89.65:34626.service: Deactivated successfully.
Apr 30 03:25:43.771870 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 03:25:43.774460 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit.
Apr 30 03:25:43.786200 systemd[1]: Started sshd@6-24.199.113.144:22-139.178.89.65:34632.service - OpenSSH per-connection server daemon (139.178.89.65:34632).
Apr 30 03:25:43.788343 systemd-logind[1446]: Removed session 6.
Apr 30 03:25:43.827415 sshd[1649]: Accepted publickey for core from 139.178.89.65 port 34632 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:43.829716 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:43.836647 systemd-logind[1446]: New session 7 of user core.
Apr 30 03:25:43.840649 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 03:25:43.902037 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 03:25:43.902410 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:25:44.386782 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 03:25:44.396906 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 03:25:44.918273 dockerd[1667]: time="2025-04-30T03:25:44.918190148Z" level=info msg="Starting up"
Apr 30 03:25:45.061835 dockerd[1667]: time="2025-04-30T03:25:45.061491858Z" level=info msg="Loading containers: start."
Apr 30 03:25:45.216281 kernel: Initializing XFRM netlink socket
Apr 30 03:25:45.250936 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 30 03:25:45.262955 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 30 03:25:45.317812 systemd-networkd[1372]: docker0: Link UP
Apr 30 03:25:45.318190 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 30 03:25:45.340175 dockerd[1667]: time="2025-04-30T03:25:45.339707158Z" level=info msg="Loading containers: done."
Apr 30 03:25:45.361133 dockerd[1667]: time="2025-04-30T03:25:45.360411387Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 03:25:45.361133 dockerd[1667]: time="2025-04-30T03:25:45.360552116Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 03:25:45.361133 dockerd[1667]: time="2025-04-30T03:25:45.360700164Z" level=info msg="Daemon has completed initialization"
Apr 30 03:25:45.362481 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1457950563-merged.mount: Deactivated successfully.
Apr 30 03:25:45.403214 dockerd[1667]: time="2025-04-30T03:25:45.403067738Z" level=info msg="API listen on /run/docker.sock"
Apr 30 03:25:45.403909 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 03:25:46.359616 containerd[1469]: time="2025-04-30T03:25:46.359552161Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
Apr 30 03:25:46.895927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279565154.mount: Deactivated successfully.
Apr 30 03:25:48.080637 containerd[1469]: time="2025-04-30T03:25:48.080387066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:48.082203 containerd[1469]: time="2025-04-30T03:25:48.082113540Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879"
Apr 30 03:25:48.082540 containerd[1469]: time="2025-04-30T03:25:48.082482156Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:48.087741 containerd[1469]: time="2025-04-30T03:25:48.086081758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:48.089448 containerd[1469]: time="2025-04-30T03:25:48.088386941Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.728778828s"
Apr 30 03:25:48.089448 containerd[1469]: time="2025-04-30T03:25:48.088468045Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
Apr 30 03:25:48.091409 containerd[1469]: time="2025-04-30T03:25:48.091359588Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
Apr 30 03:25:49.718073 containerd[1469]: time="2025-04-30T03:25:49.717984249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:49.719317 containerd[1469]: time="2025-04-30T03:25:49.718835283Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589"
Apr 30 03:25:49.719859 containerd[1469]: time="2025-04-30T03:25:49.719822161Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:49.725081 containerd[1469]: time="2025-04-30T03:25:49.725015366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:49.727388 containerd[1469]: time="2025-04-30T03:25:49.726168590Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.634556272s"
Apr 30 03:25:49.727388 containerd[1469]: time="2025-04-30T03:25:49.726219342Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
Apr 30 03:25:49.728176 containerd[1469]: time="2025-04-30T03:25:49.728015729Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
Apr 30 03:25:50.924947 containerd[1469]: time="2025-04-30T03:25:50.924869989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:50.927294 containerd[1469]: time="2025-04-30T03:25:50.927178413Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938"
Apr 30 03:25:50.928167 containerd[1469]: time="2025-04-30T03:25:50.927970490Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:50.931876 containerd[1469]: time="2025-04-30T03:25:50.931735525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:50.934991 containerd[1469]: time="2025-04-30T03:25:50.934270283Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.206104076s"
Apr 30 03:25:50.934991 containerd[1469]: time="2025-04-30T03:25:50.934346580Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
Apr 30 03:25:50.935892 containerd[1469]: time="2025-04-30T03:25:50.935857640Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
Apr 30 03:25:51.215854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:25:51.231720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:25:51.464792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:25:51.484887 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:25:51.580436 kubelet[1884]: E0430 03:25:51.580356 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:25:51.586359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:25:51.586940 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:25:52.077732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350160779.mount: Deactivated successfully.
Apr 30 03:25:52.734044 containerd[1469]: time="2025-04-30T03:25:52.733965882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:52.736265 containerd[1469]: time="2025-04-30T03:25:52.736173245Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856"
Apr 30 03:25:52.737534 containerd[1469]: time="2025-04-30T03:25:52.737499734Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:52.742164 containerd[1469]: time="2025-04-30T03:25:52.740930198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:52.742164 containerd[1469]: time="2025-04-30T03:25:52.741755242Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.805716969s"
Apr 30 03:25:52.742164 containerd[1469]: time="2025-04-30T03:25:52.741790754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
Apr 30 03:25:52.742697 containerd[1469]: time="2025-04-30T03:25:52.742654226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Apr 30 03:25:52.744657 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Apr 30 03:25:53.313550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572142587.mount: Deactivated successfully.
Apr 30 03:25:54.366537 containerd[1469]: time="2025-04-30T03:25:54.364960263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:54.366537 containerd[1469]: time="2025-04-30T03:25:54.366176207Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Apr 30 03:25:54.366537 containerd[1469]: time="2025-04-30T03:25:54.366466457Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:54.371140 containerd[1469]: time="2025-04-30T03:25:54.371078105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:54.373272 containerd[1469]: time="2025-04-30T03:25:54.373163744Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.630461768s"
Apr 30 03:25:54.373272 containerd[1469]: time="2025-04-30T03:25:54.373263757Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Apr 30 03:25:54.375181 containerd[1469]: time="2025-04-30T03:25:54.375125169Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 03:25:54.847564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544140056.mount: Deactivated successfully.
Apr 30 03:25:54.859259 containerd[1469]: time="2025-04-30T03:25:54.857787801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 30 03:25:54.860075 containerd[1469]: time="2025-04-30T03:25:54.860019599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:54.861898 containerd[1469]: time="2025-04-30T03:25:54.861852326Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:54.863159 containerd[1469]: time="2025-04-30T03:25:54.863104586Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.921527ms"
Apr 30 03:25:54.863159 containerd[1469]: time="2025-04-30T03:25:54.863160964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 30 03:25:54.865276 containerd[1469]: time="2025-04-30T03:25:54.864679105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:54.865953 containerd[1469]: time="2025-04-30T03:25:54.865910694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Apr 30 03:25:55.378148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2157222941.mount: Deactivated successfully.
Apr 30 03:25:55.819388 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Apr 30 03:25:57.292902 containerd[1469]: time="2025-04-30T03:25:57.292822268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:57.294509 containerd[1469]: time="2025-04-30T03:25:57.294433430Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Apr 30 03:25:57.295086 containerd[1469]: time="2025-04-30T03:25:57.295037580Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:57.300123 containerd[1469]: time="2025-04-30T03:25:57.300042774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:25:57.301793 containerd[1469]: time="2025-04-30T03:25:57.301735814Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.435783138s" Apr 30 03:25:57.301793 containerd[1469]: time="2025-04-30T03:25:57.301782361Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Apr 30 03:26:01.197110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:26:01.212404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:26:01.280812 systemd[1]: Reloading requested from client PID 2037 ('systemctl') (unit session-7.scope)... Apr 30 03:26:01.281102 systemd[1]: Reloading... Apr 30 03:26:01.482481 zram_generator::config[2076]: No configuration found. Apr 30 03:26:01.720710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:26:01.851047 systemd[1]: Reloading finished in 569 ms. Apr 30 03:26:01.924584 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:26:01.924734 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:26:01.925211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:26:01.928497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:26:02.126615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:26:02.144103 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:26:02.222496 kubelet[2129]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:26:02.222496 kubelet[2129]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 03:26:02.222496 kubelet[2129]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:26:02.223746 kubelet[2129]: I0430 03:26:02.222659 2129 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:26:02.635516 kubelet[2129]: I0430 03:26:02.635441 2129 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 03:26:02.635516 kubelet[2129]: I0430 03:26:02.635487 2129 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:26:02.635893 kubelet[2129]: I0430 03:26:02.635784 2129 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 03:26:02.677115 kubelet[2129]: I0430 03:26:02.677039 2129 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:26:02.678893 kubelet[2129]: E0430 03:26:02.678749 2129 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://24.199.113.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.199.113.144:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:26:02.696845 kubelet[2129]: E0430 03:26:02.696760 2129 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service 
runtime.v1.RuntimeService" Apr 30 03:26:02.696845 kubelet[2129]: I0430 03:26:02.696823 2129 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:26:02.703150 kubelet[2129]: I0430 03:26:02.703079 2129 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:26:02.703442 kubelet[2129]: I0430 03:26:02.703386 2129 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:26:02.703654 kubelet[2129]: I0430 03:26:02.703427 2129 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-2-e7e0406ed5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"Topol
ogyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:26:02.703654 kubelet[2129]: I0430 03:26:02.703656 2129 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:26:02.704037 kubelet[2129]: I0430 03:26:02.703667 2129 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 03:26:02.704037 kubelet[2129]: I0430 03:26:02.703870 2129 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:26:02.708699 kubelet[2129]: I0430 03:26:02.708484 2129 kubelet.go:446] "Attempting to sync node with API server" Apr 30 03:26:02.708699 kubelet[2129]: I0430 03:26:02.708670 2129 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:26:02.708935 kubelet[2129]: I0430 03:26:02.708731 2129 kubelet.go:352] "Adding apiserver pod source" Apr 30 03:26:02.708935 kubelet[2129]: I0430 03:26:02.708753 2129 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:26:02.717572 kubelet[2129]: W0430 03:26:02.716601 2129 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.199.113.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-2-e7e0406ed5&limit=500&resourceVersion=0": dial tcp 24.199.113.144:6443: connect: connection refused Apr 30 03:26:02.717572 kubelet[2129]: E0430 03:26:02.716684 2129 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.199.113.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-2-e7e0406ed5&limit=500&resourceVersion=0\": dial tcp 24.199.113.144:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:26:02.717931 
kubelet[2129]: W0430 03:26:02.717870 2129 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.113.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.113.144:6443: connect: connection refused Apr 30 03:26:02.718063 kubelet[2129]: E0430 03:26:02.718036 2129 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.199.113.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.113.144:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:26:02.718274 kubelet[2129]: I0430 03:26:02.718258 2129 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:26:02.725254 kubelet[2129]: I0430 03:26:02.725201 2129 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:26:02.726973 kubelet[2129]: W0430 03:26:02.726917 2129 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 03:26:02.727793 kubelet[2129]: I0430 03:26:02.727727 2129 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 03:26:02.727793 kubelet[2129]: I0430 03:26:02.727781 2129 server.go:1287] "Started kubelet"
Apr 30 03:26:02.730365 kubelet[2129]: I0430 03:26:02.730288 2129 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 03:26:02.732103 kubelet[2129]: I0430 03:26:02.732067 2129 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 03:26:02.733912 kubelet[2129]: I0430 03:26:02.733833 2129 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 03:26:02.734544 kubelet[2129]: I0430 03:26:02.734515 2129 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 03:26:02.738969 kubelet[2129]: E0430 03:26:02.736848 2129 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.113.144:6443/api/v1/namespaces/default/events\": dial tcp 24.199.113.144:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-2-e7e0406ed5.183afaccb80f09a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-2-e7e0406ed5,UID:ci-4081.3.3-2-e7e0406ed5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-2-e7e0406ed5,},FirstTimestamp:2025-04-30 03:26:02.727754152 +0000 UTC m=+0.572323082,LastTimestamp:2025-04-30 03:26:02.727754152 +0000 UTC m=+0.572323082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-2-e7e0406ed5,}"
Apr 30 03:26:02.739876 kubelet[2129]: I0430 03:26:02.739649 2129 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 03:26:02.744744 kubelet[2129]: I0430 03:26:02.744667 2129 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 03:26:02.751764 kubelet[2129]: E0430 03:26:02.751718 2129 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found"
Apr 30 03:26:02.752272 kubelet[2129]: I0430 03:26:02.752036 2129 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 03:26:02.753481 kubelet[2129]: I0430 03:26:02.753455 2129 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 03:26:02.753669 kubelet[2129]: I0430 03:26:02.753659 2129 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 03:26:02.754856 kubelet[2129]: W0430 03:26:02.754245 2129 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.199.113.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.113.144:6443: connect: connection refused
Apr 30 03:26:02.754856 kubelet[2129]: E0430 03:26:02.754317 2129 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.199.113.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.113.144:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:26:02.754856 kubelet[2129]: E0430 03:26:02.754392 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.113.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-2-e7e0406ed5?timeout=10s\": dial tcp 24.199.113.144:6443: connect: connection refused" interval="200ms"
Apr 30 03:26:02.757688 kubelet[2129]: E0430 03:26:02.757653 2129 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 03:26:02.758585 kubelet[2129]: I0430 03:26:02.758555 2129 factory.go:221] Registration of the systemd container factory successfully
Apr 30 03:26:02.759099 kubelet[2129]: I0430 03:26:02.759076 2129 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 03:26:02.761765 kubelet[2129]: I0430 03:26:02.761734 2129 factory.go:221] Registration of the containerd container factory successfully
Apr 30 03:26:02.789857 kubelet[2129]: I0430 03:26:02.789778 2129 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 03:26:02.798875 kubelet[2129]: I0430 03:26:02.798390 2129 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 03:26:02.798875 kubelet[2129]: I0430 03:26:02.798446 2129 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 03:26:02.798875 kubelet[2129]: I0430 03:26:02.798477 2129 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 03:26:02.798875 kubelet[2129]: I0430 03:26:02.798489 2129 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 03:26:02.798875 kubelet[2129]: E0430 03:26:02.798567 2129 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 03:26:02.800102 kubelet[2129]: I0430 03:26:02.800064 2129 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 03:26:02.800102 kubelet[2129]: I0430 03:26:02.800088 2129 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 03:26:02.800369 kubelet[2129]: I0430 03:26:02.800123 2129 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:26:02.803078 kubelet[2129]: I0430 03:26:02.802955 2129 policy_none.go:49] "None policy: Start"
Apr 30 03:26:02.803078 kubelet[2129]: I0430 03:26:02.803006 2129 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 03:26:02.803078 kubelet[2129]: I0430 03:26:02.803030 2129 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 03:26:02.806059 kubelet[2129]: W0430 03:26:02.805898 2129 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.199.113.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.113.144:6443: connect: connection refused
Apr 30 03:26:02.806059 kubelet[2129]: E0430 03:26:02.806001 2129 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.199.113.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.113.144:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:26:02.813628 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 03:26:02.834314 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 30 03:26:02.839948 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 30 03:26:02.852203 kubelet[2129]: E0430 03:26:02.852122 2129 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found"
Apr 30 03:26:02.852469 kubelet[2129]: I0430 03:26:02.852395 2129 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 03:26:02.853953 kubelet[2129]: I0430 03:26:02.852774 2129 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 03:26:02.853953 kubelet[2129]: I0430 03:26:02.852828 2129 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 03:26:02.853953 kubelet[2129]: I0430 03:26:02.853495 2129 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 03:26:02.856211 kubelet[2129]: E0430 03:26:02.856176 2129 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 30 03:26:02.856526 kubelet[2129]: E0430 03:26:02.856488 2129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-2-e7e0406ed5\" not found"
Apr 30 03:26:02.915464 systemd[1]: Created slice kubepods-burstable-poda8038d26e047432ca9520b29e47318e0.slice - libcontainer container kubepods-burstable-poda8038d26e047432ca9520b29e47318e0.slice.
Apr 30 03:26:02.935755 kubelet[2129]: E0430 03:26:02.935404 2129 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.942830 systemd[1]: Created slice kubepods-burstable-podd2fac22366d3ca4cb9b858e7dfa466ab.slice - libcontainer container kubepods-burstable-podd2fac22366d3ca4cb9b858e7dfa466ab.slice.
Apr 30 03:26:02.952715 kubelet[2129]: E0430 03:26:02.952307 2129 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.955286 kubelet[2129]: I0430 03:26:02.955179 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8038d26e047432ca9520b29e47318e0-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-2-e7e0406ed5\" (UID: \"a8038d26e047432ca9520b29e47318e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.955453 kubelet[2129]: I0430 03:26:02.955302 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8038d26e047432ca9520b29e47318e0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-2-e7e0406ed5\" (UID: \"a8038d26e047432ca9520b29e47318e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.956292 kubelet[2129]: E0430 03:26:02.955701 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.113.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-2-e7e0406ed5?timeout=10s\": dial tcp 24.199.113.144:6443: connect: connection refused" interval="400ms"
Apr 30 03:26:02.956292 kubelet[2129]: I0430 03:26:02.955807 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.956292 kubelet[2129]: I0430 03:26:02.955860 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.956292 kubelet[2129]: I0430 03:26:02.955888 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.956292 kubelet[2129]: I0430 03:26:02.956039 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69f55424ac3d5a59363ec78b011f765a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-2-e7e0406ed5\" (UID: \"69f55424ac3d5a59363ec78b011f765a\") " pod="kube-system/kube-scheduler-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.956292 kubelet[2129]: I0430 03:26:02.956045 2129 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.956670 kubelet[2129]: I0430 03:26:02.956073 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.956670 kubelet[2129]: I0430 03:26:02.956100 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.956670 kubelet[2129]: I0430 03:26:02.956129 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8038d26e047432ca9520b29e47318e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-2-e7e0406ed5\" (UID: \"a8038d26e047432ca9520b29e47318e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.958612 systemd[1]: Created slice kubepods-burstable-pod69f55424ac3d5a59363ec78b011f765a.slice - libcontainer container kubepods-burstable-pod69f55424ac3d5a59363ec78b011f765a.slice.
Apr 30 03:26:02.962375 kubelet[2129]: E0430 03:26:02.960531 2129 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://24.199.113.144:6443/api/v1/nodes\": dial tcp 24.199.113.144:6443: connect: connection refused" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:02.964169 kubelet[2129]: E0430 03:26:02.964138 2129 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:03.164277 kubelet[2129]: I0430 03:26:03.163795 2129 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:03.164531 kubelet[2129]: E0430 03:26:03.164487 2129 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://24.199.113.144:6443/api/v1/nodes\": dial tcp 24.199.113.144:6443: connect: connection refused" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:03.209641 kubelet[2129]: E0430 03:26:03.209327 2129 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.113.144:6443/api/v1/namespaces/default/events\": dial tcp 24.199.113.144:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-2-e7e0406ed5.183afaccb80f09a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-2-e7e0406ed5,UID:ci-4081.3.3-2-e7e0406ed5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-2-e7e0406ed5,},FirstTimestamp:2025-04-30 03:26:02.727754152 +0000 UTC m=+0.572323082,LastTimestamp:2025-04-30 03:26:02.727754152 +0000 UTC m=+0.572323082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-2-e7e0406ed5,}"
Apr 30 03:26:03.236545 kubelet[2129]: E0430 03:26:03.236478 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:03.237872 containerd[1469]: time="2025-04-30T03:26:03.237796287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-2-e7e0406ed5,Uid:a8038d26e047432ca9520b29e47318e0,Namespace:kube-system,Attempt:0,}"
Apr 30 03:26:03.240668 systemd-resolved[1323]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Apr 30 03:26:03.253067 kubelet[2129]: E0430 03:26:03.252925 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:03.259816 containerd[1469]: time="2025-04-30T03:26:03.259755913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-2-e7e0406ed5,Uid:d2fac22366d3ca4cb9b858e7dfa466ab,Namespace:kube-system,Attempt:0,}"
Apr 30 03:26:03.265616 kubelet[2129]: E0430 03:26:03.265551 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:03.266934 containerd[1469]: time="2025-04-30T03:26:03.266482324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-2-e7e0406ed5,Uid:69f55424ac3d5a59363ec78b011f765a,Namespace:kube-system,Attempt:0,}"
Apr 30 03:26:03.357221 kubelet[2129]: E0430 03:26:03.357142 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.113.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-2-e7e0406ed5?timeout=10s\": dial tcp 24.199.113.144:6443: connect: connection refused" interval="800ms"
Apr 30 03:26:03.566709 kubelet[2129]: I0430 03:26:03.566025 2129 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:03.566709 kubelet[2129]: E0430 03:26:03.566596 2129 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://24.199.113.144:6443/api/v1/nodes\": dial tcp 24.199.113.144:6443: connect: connection refused" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:03.679291 kubelet[2129]: W0430 03:26:03.678144 2129 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.113.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.113.144:6443: connect: connection refused
Apr 30 03:26:03.679291 kubelet[2129]: E0430 03:26:03.678216 2129 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.199.113.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.113.144:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:26:03.678602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779657722.mount: Deactivated successfully.
Apr 30 03:26:03.690271 containerd[1469]: time="2025-04-30T03:26:03.689530918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:26:03.691144 containerd[1469]: time="2025-04-30T03:26:03.691087042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Apr 30 03:26:03.691678 containerd[1469]: time="2025-04-30T03:26:03.691624161Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:26:03.692885 containerd[1469]: time="2025-04-30T03:26:03.692706376Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:26:03.694085 containerd[1469]: time="2025-04-30T03:26:03.694018107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:26:03.694205 containerd[1469]: time="2025-04-30T03:26:03.694114379Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:26:03.695448 containerd[1469]: time="2025-04-30T03:26:03.695376419Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:26:03.696511 kubelet[2129]: W0430 03:26:03.696455 2129 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.199.113.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.113.144:6443: connect: connection refused
Apr 30 03:26:03.696672 kubelet[2129]: E0430 03:26:03.696522 2129 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.199.113.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.113.144:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:26:03.700147 containerd[1469]: time="2025-04-30T03:26:03.700033306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:26:03.701505 containerd[1469]: time="2025-04-30T03:26:03.701176134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 434.585056ms"
Apr 30 03:26:03.705923 containerd[1469]: time="2025-04-30T03:26:03.705611109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.440553ms"
Apr 30 03:26:03.711391 containerd[1469]: time="2025-04-30T03:26:03.711207884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 473.293473ms"
Apr 30 03:26:03.881312 containerd[1469]: time="2025-04-30T03:26:03.876654623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:26:03.881312 containerd[1469]: time="2025-04-30T03:26:03.876748664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:26:03.881312 containerd[1469]: time="2025-04-30T03:26:03.876765904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:03.881312 containerd[1469]: time="2025-04-30T03:26:03.876922172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:03.890260 containerd[1469]: time="2025-04-30T03:26:03.889711491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:26:03.890260 containerd[1469]: time="2025-04-30T03:26:03.889880550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:26:03.890260 containerd[1469]: time="2025-04-30T03:26:03.889911025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:03.890260 containerd[1469]: time="2025-04-30T03:26:03.890123292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:03.895992 containerd[1469]: time="2025-04-30T03:26:03.895525043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:26:03.895992 containerd[1469]: time="2025-04-30T03:26:03.895625894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:26:03.895992 containerd[1469]: time="2025-04-30T03:26:03.895645777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:03.895992 containerd[1469]: time="2025-04-30T03:26:03.895786500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:03.926667 systemd[1]: Started cri-containerd-b59772906a06871b6f829f70da8441782df0a0f6e5ec4b0b6f4fd7444f433bff.scope - libcontainer container b59772906a06871b6f829f70da8441782df0a0f6e5ec4b0b6f4fd7444f433bff.
Apr 30 03:26:03.946637 systemd[1]: Started cri-containerd-07b35c786b88b60043cea91e8db406038f18daa7e1c754570e6cab9abf91bf6b.scope - libcontainer container 07b35c786b88b60043cea91e8db406038f18daa7e1c754570e6cab9abf91bf6b.
Apr 30 03:26:03.952655 systemd[1]: Started cri-containerd-7880c78d18624330f05ebf1797328332feeb8119c5bab41d099a180e444fe112.scope - libcontainer container 7880c78d18624330f05ebf1797328332feeb8119c5bab41d099a180e444fe112.
Apr 30 03:26:03.994397 kubelet[2129]: W0430 03:26:03.994318 2129 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.199.113.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.113.144:6443: connect: connection refused
Apr 30 03:26:03.994397 kubelet[2129]: E0430 03:26:03.994448 2129 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.199.113.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.113.144:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:26:04.034750 containerd[1469]: time="2025-04-30T03:26:04.034190093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-2-e7e0406ed5,Uid:a8038d26e047432ca9520b29e47318e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b59772906a06871b6f829f70da8441782df0a0f6e5ec4b0b6f4fd7444f433bff\""
Apr 30 03:26:04.040363 kubelet[2129]: E0430 03:26:04.040068 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:04.052521 containerd[1469]: time="2025-04-30T03:26:04.052221967Z" level=info msg="CreateContainer within sandbox \"b59772906a06871b6f829f70da8441782df0a0f6e5ec4b0b6f4fd7444f433bff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 30 03:26:04.057843 containerd[1469]: time="2025-04-30T03:26:04.057790681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-2-e7e0406ed5,Uid:d2fac22366d3ca4cb9b858e7dfa466ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"07b35c786b88b60043cea91e8db406038f18daa7e1c754570e6cab9abf91bf6b\""
Apr 30 03:26:04.059521 kubelet[2129]: E0430 03:26:04.059482 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:04.063169 containerd[1469]: time="2025-04-30T03:26:04.062928059Z" level=info msg="CreateContainer within sandbox \"07b35c786b88b60043cea91e8db406038f18daa7e1c754570e6cab9abf91bf6b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 30 03:26:04.077691 containerd[1469]: time="2025-04-30T03:26:04.077495642Z" level=info msg="CreateContainer within sandbox \"b59772906a06871b6f829f70da8441782df0a0f6e5ec4b0b6f4fd7444f433bff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"79d638460070882c55b1fb78389de730382396d6ac9cbb1b779495b8bf913ce4\""
Apr 30 03:26:04.080298 containerd[1469]: time="2025-04-30T03:26:04.079634559Z" level=info msg="StartContainer for \"79d638460070882c55b1fb78389de730382396d6ac9cbb1b779495b8bf913ce4\""
Apr 30 03:26:04.080728 containerd[1469]: time="2025-04-30T03:26:04.080681841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-2-e7e0406ed5,Uid:69f55424ac3d5a59363ec78b011f765a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7880c78d18624330f05ebf1797328332feeb8119c5bab41d099a180e444fe112\""
Apr 30 03:26:04.083085 kubelet[2129]: E0430 03:26:04.082989 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:04.086931 containerd[1469]: time="2025-04-30T03:26:04.086856565Z" level=info msg="CreateContainer within sandbox \"7880c78d18624330f05ebf1797328332feeb8119c5bab41d099a180e444fe112\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 30 03:26:04.091137 containerd[1469]: time="2025-04-30T03:26:04.091076159Z" level=info msg="CreateContainer within sandbox \"07b35c786b88b60043cea91e8db406038f18daa7e1c754570e6cab9abf91bf6b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"63a6e25d91fe8d4ca7f5373a1ae8a9213a02facc28762db97834d82a6b4e6b7c\""
Apr 30 03:26:04.094319 containerd[1469]: time="2025-04-30T03:26:04.092704440Z" level=info msg="StartContainer for \"63a6e25d91fe8d4ca7f5373a1ae8a9213a02facc28762db97834d82a6b4e6b7c\""
Apr 30 03:26:04.098862 kubelet[2129]: W0430 03:26:04.098770 2129 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.199.113.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-2-e7e0406ed5&limit=500&resourceVersion=0": dial tcp 24.199.113.144:6443: connect: connection refused
Apr 30 03:26:04.099154 kubelet[2129]: E0430 03:26:04.099127 2129 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.199.113.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-2-e7e0406ed5&limit=500&resourceVersion=0\": dial tcp 24.199.113.144:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:26:04.105594 containerd[1469]: time="2025-04-30T03:26:04.105530922Z" level=info msg="CreateContainer within sandbox \"7880c78d18624330f05ebf1797328332feeb8119c5bab41d099a180e444fe112\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c6abe6e18379e37e9e08c978e6044ed1794cdc22ec648355bf52547060ded864\""
Apr 30 03:26:04.106469 containerd[1469]: time="2025-04-30T03:26:04.106431662Z" level=info msg="StartContainer for \"c6abe6e18379e37e9e08c978e6044ed1794cdc22ec648355bf52547060ded864\""
Apr 30 03:26:04.151532 systemd[1]: Started cri-containerd-79d638460070882c55b1fb78389de730382396d6ac9cbb1b779495b8bf913ce4.scope - libcontainer container 79d638460070882c55b1fb78389de730382396d6ac9cbb1b779495b8bf913ce4.
Apr 30 03:26:04.158939 kubelet[2129]: E0430 03:26:04.158900 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.113.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-2-e7e0406ed5?timeout=10s\": dial tcp 24.199.113.144:6443: connect: connection refused" interval="1.6s"
Apr 30 03:26:04.162566 systemd[1]: Started cri-containerd-63a6e25d91fe8d4ca7f5373a1ae8a9213a02facc28762db97834d82a6b4e6b7c.scope - libcontainer container 63a6e25d91fe8d4ca7f5373a1ae8a9213a02facc28762db97834d82a6b4e6b7c.
Apr 30 03:26:04.171551 systemd[1]: Started cri-containerd-c6abe6e18379e37e9e08c978e6044ed1794cdc22ec648355bf52547060ded864.scope - libcontainer container c6abe6e18379e37e9e08c978e6044ed1794cdc22ec648355bf52547060ded864.
Apr 30 03:26:04.242564 containerd[1469]: time="2025-04-30T03:26:04.242510283Z" level=info msg="StartContainer for \"79d638460070882c55b1fb78389de730382396d6ac9cbb1b779495b8bf913ce4\" returns successfully"
Apr 30 03:26:04.273821 containerd[1469]: time="2025-04-30T03:26:04.273754496Z" level=info msg="StartContainer for \"63a6e25d91fe8d4ca7f5373a1ae8a9213a02facc28762db97834d82a6b4e6b7c\" returns successfully"
Apr 30 03:26:04.296428 containerd[1469]: time="2025-04-30T03:26:04.296208001Z" level=info msg="StartContainer for \"c6abe6e18379e37e9e08c978e6044ed1794cdc22ec648355bf52547060ded864\" returns successfully"
Apr 30 03:26:04.368155 kubelet[2129]: I0430 03:26:04.368050 2129 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:04.368662 kubelet[2129]: E0430 03:26:04.368524 2129 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://24.199.113.144:6443/api/v1/nodes\": dial tcp 24.199.113.144:6443: connect: connection refused" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:04.810257 kubelet[2129]: E0430 03:26:04.809120 2129 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:04.810257 kubelet[2129]: E0430 03:26:04.809310 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:04.816116 kubelet[2129]: E0430 03:26:04.815793 2129 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:04.816116 kubelet[2129]: E0430 03:26:04.816005 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:04.820011 kubelet[2129]: E0430 03:26:04.819696 2129 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:04.820011 kubelet[2129]: E0430 03:26:04.819897 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:05.822151 kubelet[2129]: E0430 03:26:05.822103 2129 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:05.822834 kubelet[2129]: E0430 03:26:05.822310 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:05.822834 kubelet[2129]: E0430 03:26:05.822661 2129 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:05.822834 kubelet[2129]: E0430 03:26:05.822790 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:05.971416 kubelet[2129]: I0430 03:26:05.971166 2129 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:06.823635 kubelet[2129]: E0430 03:26:06.823376 2129 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:06.823635 kubelet[2129]: E0430 03:26:06.823523 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:06.866728 kubelet[2129]: E0430 03:26:06.866650 2129 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-2-e7e0406ed5\" not found" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:06.944419 kubelet[2129]: I0430 03:26:06.944370 2129 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:06.954407 kubelet[2129]: I0430 03:26:06.953412 2129 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:07.015273 kubelet[2129]: E0430 03:26:07.013214 2129 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-2-e7e0406ed5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:07.015273 kubelet[2129]: I0430 03:26:07.013274 2129 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:07.016729 kubelet[2129]: E0430 03:26:07.016688 2129 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:07.016729 kubelet[2129]: I0430 03:26:07.016726 2129 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:07.020513 kubelet[2129]: E0430 03:26:07.020456 2129 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-2-e7e0406ed5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:07.719422 kubelet[2129]: I0430 03:26:07.719321 2129 apiserver.go:52] "Watching apiserver"
Apr 30 03:26:07.754002 kubelet[2129]: I0430 03:26:07.753838 2129 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 03:26:09.112811 systemd[1]: Reloading requested from client PID 2403 ('systemctl') (unit session-7.scope)...
Apr 30 03:26:09.112828 systemd[1]: Reloading...
Apr 30 03:26:09.230362 zram_generator::config[2442]: No configuration found.
Apr 30 03:26:09.375925 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:26:09.404861 kubelet[2129]: I0430 03:26:09.402540 2129 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:09.413628 kubelet[2129]: W0430 03:26:09.413132 2129 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 03:26:09.413628 kubelet[2129]: E0430 03:26:09.413492 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:09.518555 systemd[1]: Reloading finished in 404 ms.
Apr 30 03:26:09.576900 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:26:09.590983 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 03:26:09.591256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:26:09.591334 systemd[1]: kubelet.service: Consumed 1.092s CPU time, 121.0M memory peak, 0B memory swap peak.
Apr 30 03:26:09.602354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:26:09.769550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:26:09.769846 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:26:09.838101 kubelet[2493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:26:09.838101 kubelet[2493]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:26:09.838101 kubelet[2493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:26:09.841423 kubelet[2493]: I0430 03:26:09.840320 2493 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:26:09.854913 kubelet[2493]: I0430 03:26:09.854846 2493 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 03:26:09.854913 kubelet[2493]: I0430 03:26:09.854885 2493 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:26:09.855257 kubelet[2493]: I0430 03:26:09.855223 2493 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 03:26:09.856920 kubelet[2493]: I0430 03:26:09.856881 2493 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 30 03:26:09.865307 kubelet[2493]: I0430 03:26:09.865207 2493 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:26:09.870955 kubelet[2493]: E0430 03:26:09.870889 2493 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 03:26:09.870955 kubelet[2493]: I0430 03:26:09.870934 2493 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 03:26:09.878272 kubelet[2493]: I0430 03:26:09.876572 2493 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:26:09.878272 kubelet[2493]: I0430 03:26:09.876934 2493 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:26:09.878272 kubelet[2493]: I0430 03:26:09.876974 2493 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-2-e7e0406ed5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 03:26:09.878272 kubelet[2493]: I0430 03:26:09.877453 2493 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 03:26:09.878756 kubelet[2493]: I0430 03:26:09.877471 2493 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 03:26:09.878756 kubelet[2493]: I0430 03:26:09.877544 2493 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:26:09.878756 kubelet[2493]: I0430 03:26:09.877773 2493 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 03:26:09.878756 kubelet[2493]: I0430 03:26:09.877795 2493 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 03:26:09.878756 kubelet[2493]: I0430 03:26:09.877830 2493 kubelet.go:352] "Adding apiserver pod source"
Apr 30 03:26:09.878756 kubelet[2493]: I0430 03:26:09.877867 2493 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 03:26:09.886801 kubelet[2493]: I0430 03:26:09.886756 2493 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 03:26:09.887294 kubelet[2493]: I0430 03:26:09.887269 2493 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 03:26:09.888260 kubelet[2493]: I0430 03:26:09.887854 2493 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 03:26:09.888260 kubelet[2493]: I0430 03:26:09.887906 2493 server.go:1287] "Started kubelet"
Apr 30 03:26:09.892656 kubelet[2493]: I0430 03:26:09.892574 2493 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 03:26:09.902495 kubelet[2493]: I0430 03:26:09.902426 2493 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 03:26:09.903778 kubelet[2493]: I0430 03:26:09.903727 2493 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 03:26:09.905361 kubelet[2493]: I0430 03:26:09.905267 2493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 03:26:09.905652 kubelet[2493]: I0430 03:26:09.905629 2493 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 03:26:09.905964 kubelet[2493]: I0430 03:26:09.905942 2493 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 03:26:09.908835 kubelet[2493]: I0430 03:26:09.908736 2493 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 03:26:09.909477 kubelet[2493]: E0430 03:26:09.909384 2493 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-2-e7e0406ed5\" not found"
Apr 30 03:26:09.914585 kubelet[2493]: I0430 03:26:09.912127 2493 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 03:26:09.914585 kubelet[2493]: I0430 03:26:09.912324 2493 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 03:26:09.916089 kubelet[2493]: I0430 03:26:09.915139 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 03:26:09.917836 kubelet[2493]: I0430 03:26:09.917134 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 03:26:09.917836 kubelet[2493]: I0430 03:26:09.917205 2493 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 03:26:09.917836 kubelet[2493]: I0430 03:26:09.917320 2493 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 03:26:09.917836 kubelet[2493]: I0430 03:26:09.917333 2493 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 03:26:09.917836 kubelet[2493]: E0430 03:26:09.917415 2493 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 03:26:09.930711 kubelet[2493]: I0430 03:26:09.928896 2493 factory.go:221] Registration of the systemd container factory successfully
Apr 30 03:26:09.932875 kubelet[2493]: I0430 03:26:09.932584 2493 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 03:26:09.934053 kubelet[2493]: E0430 03:26:09.934018 2493 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 03:26:09.937176 kubelet[2493]: I0430 03:26:09.936889 2493 factory.go:221] Registration of the containerd container factory successfully
Apr 30 03:26:10.004723 kubelet[2493]: I0430 03:26:10.004677 2493 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 03:26:10.004723 kubelet[2493]: I0430 03:26:10.004705 2493 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 03:26:10.004723 kubelet[2493]: I0430 03:26:10.004738 2493 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:26:10.005131 kubelet[2493]: I0430 03:26:10.005084 2493 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 03:26:10.005131 kubelet[2493]: I0430 03:26:10.005116 2493 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 03:26:10.005304 kubelet[2493]: I0430 03:26:10.005149 2493 policy_none.go:49] "None policy: Start"
Apr 30 03:26:10.005304 kubelet[2493]: I0430 03:26:10.005166 2493 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 03:26:10.005304 kubelet[2493]: I0430 03:26:10.005185 2493 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 03:26:10.005459 kubelet[2493]: I0430 03:26:10.005386 2493 state_mem.go:75] "Updated machine memory state"
Apr 30 03:26:10.013021 kubelet[2493]: I0430 03:26:10.012975 2493 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 03:26:10.013309 kubelet[2493]: I0430 03:26:10.013290 2493 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 03:26:10.013376 kubelet[2493]: I0430 03:26:10.013313 2493 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 03:26:10.014219 kubelet[2493]: I0430 03:26:10.014171 2493 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 03:26:10.018287 kubelet[2493]: I0430 03:26:10.018190 2493 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.020157 kubelet[2493]: I0430 03:26:10.020124 2493 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.021788 kubelet[2493]: I0430 03:26:10.021664 2493 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.027606 kubelet[2493]: E0430 03:26:10.027566 2493 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 30 03:26:10.045065 kubelet[2493]: W0430 03:26:10.044943 2493 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 03:26:10.045566 kubelet[2493]: W0430 03:26:10.045525 2493 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 03:26:10.051762 kubelet[2493]: W0430 03:26:10.051714 2493 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 03:26:10.052116 kubelet[2493]: E0430 03:26:10.051815 2493 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.116011 kubelet[2493]: I0430 03:26:10.115517 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8038d26e047432ca9520b29e47318e0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-2-e7e0406ed5\" (UID: \"a8038d26e047432ca9520b29e47318e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.116011 kubelet[2493]: I0430 03:26:10.115593 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.116011 kubelet[2493]: I0430 03:26:10.115668 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.116011 kubelet[2493]: I0430 03:26:10.115703 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69f55424ac3d5a59363ec78b011f765a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-2-e7e0406ed5\" (UID: \"69f55424ac3d5a59363ec78b011f765a\") " pod="kube-system/kube-scheduler-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.116011 kubelet[2493]: I0430 03:26:10.115723 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8038d26e047432ca9520b29e47318e0-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-2-e7e0406ed5\" (UID: \"a8038d26e047432ca9520b29e47318e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.117592 kubelet[2493]: I0430 03:26:10.115740 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8038d26e047432ca9520b29e47318e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-2-e7e0406ed5\" (UID: \"a8038d26e047432ca9520b29e47318e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.117592 kubelet[2493]: I0430 03:26:10.115801 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:10.117592 kubelet[2493]: I0430
03:26:10.115820 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:10.119589 kubelet[2493]: I0430 03:26:10.116469 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2fac22366d3ca4cb9b858e7dfa466ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-2-e7e0406ed5\" (UID: \"d2fac22366d3ca4cb9b858e7dfa466ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:10.129315 kubelet[2493]: I0430 03:26:10.128891 2493 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:10.141864 kubelet[2493]: I0430 03:26:10.141747 2493 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:10.142248 kubelet[2493]: I0430 03:26:10.142052 2493 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:10.346801 kubelet[2493]: E0430 03:26:10.346271 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:10.348112 kubelet[2493]: E0430 03:26:10.348023 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:10.354557 kubelet[2493]: E0430 03:26:10.354443 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:10.882247 kubelet[2493]: I0430 03:26:10.881766 2493 apiserver.go:52] "Watching apiserver" Apr 30 03:26:10.912893 kubelet[2493]: I0430 03:26:10.912835 2493 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:26:10.981381 kubelet[2493]: E0430 03:26:10.978745 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:10.981381 kubelet[2493]: I0430 03:26:10.979376 2493 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:10.981381 kubelet[2493]: E0430 03:26:10.979747 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:11.034961 kubelet[2493]: W0430 03:26:11.034918 2493 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:26:11.035376 kubelet[2493]: E0430 03:26:11.035344 2493 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-2-e7e0406ed5\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:11.035837 kubelet[2493]: E0430 03:26:11.035784 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:11.147345 kubelet[2493]: I0430 03:26:11.146648 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-2-e7e0406ed5" podStartSLOduration=1.146618159 podStartE2EDuration="1.146618159s" 
podCreationTimestamp="2025-04-30 03:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:26:11.096798798 +0000 UTC m=+1.320340736" watchObservedRunningTime="2025-04-30 03:26:11.146618159 +0000 UTC m=+1.370160093" Apr 30 03:26:11.190509 kubelet[2493]: I0430 03:26:11.190171 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-2-e7e0406ed5" podStartSLOduration=1.190149968 podStartE2EDuration="1.190149968s" podCreationTimestamp="2025-04-30 03:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:26:11.150416268 +0000 UTC m=+1.373958210" watchObservedRunningTime="2025-04-30 03:26:11.190149968 +0000 UTC m=+1.413691903" Apr 30 03:26:11.981806 kubelet[2493]: E0430 03:26:11.981441 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:11.982318 kubelet[2493]: E0430 03:26:11.981825 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:12.984970 kubelet[2493]: E0430 03:26:12.984930 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:16.388180 systemd-timesyncd[1366]: Contacted time server 15.204.87.223:123 (2.flatcar.pool.ntp.org). Apr 30 03:26:16.388292 systemd-timesyncd[1366]: Initial clock synchronization to Wed 2025-04-30 03:26:16.387881 UTC. Apr 30 03:26:16.388393 systemd-resolved[1323]: Clock change detected. Flushing caches. 
Apr 30 03:26:16.651785 sudo[1652]: pam_unix(sudo:session): session closed for user root Apr 30 03:26:16.658643 sshd[1649]: pam_unix(sshd:session): session closed for user core Apr 30 03:26:16.666210 systemd[1]: sshd@6-24.199.113.144:22-139.178.89.65:34632.service: Deactivated successfully. Apr 30 03:26:16.670012 kubelet[2493]: I0430 03:26:16.669864 2493 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:26:16.671641 containerd[1469]: time="2025-04-30T03:26:16.670738486Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:26:16.671345 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:26:16.673218 kubelet[2493]: I0430 03:26:16.670965 2493 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:26:16.672604 systemd[1]: session-7.scope: Consumed 6.116s CPU time, 147.8M memory peak, 0B memory swap peak. Apr 30 03:26:16.674404 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:26:16.676555 systemd-logind[1446]: Removed session 7. Apr 30 03:26:17.387148 kubelet[2493]: I0430 03:26:17.386788 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-2-e7e0406ed5" podStartSLOduration=8.386764973 podStartE2EDuration="8.386764973s" podCreationTimestamp="2025-04-30 03:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:26:11.190478855 +0000 UTC m=+1.414020798" watchObservedRunningTime="2025-04-30 03:26:17.386764973 +0000 UTC m=+6.787084756" Apr 30 03:26:17.406074 systemd[1]: Created slice kubepods-besteffort-pod21b8466f_90f0_40bc_bca4_90fbf1744045.slice - libcontainer container kubepods-besteffort-pod21b8466f_90f0_40bc_bca4_90fbf1744045.slice. 
Apr 30 03:26:17.488868 kubelet[2493]: I0430 03:26:17.488625 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/21b8466f-90f0-40bc-bca4-90fbf1744045-kube-proxy\") pod \"kube-proxy-fj5sf\" (UID: \"21b8466f-90f0-40bc-bca4-90fbf1744045\") " pod="kube-system/kube-proxy-fj5sf" Apr 30 03:26:17.488868 kubelet[2493]: I0430 03:26:17.488670 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21b8466f-90f0-40bc-bca4-90fbf1744045-xtables-lock\") pod \"kube-proxy-fj5sf\" (UID: \"21b8466f-90f0-40bc-bca4-90fbf1744045\") " pod="kube-system/kube-proxy-fj5sf" Apr 30 03:26:17.488868 kubelet[2493]: I0430 03:26:17.488699 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zfpl\" (UniqueName: \"kubernetes.io/projected/21b8466f-90f0-40bc-bca4-90fbf1744045-kube-api-access-6zfpl\") pod \"kube-proxy-fj5sf\" (UID: \"21b8466f-90f0-40bc-bca4-90fbf1744045\") " pod="kube-system/kube-proxy-fj5sf" Apr 30 03:26:17.488868 kubelet[2493]: I0430 03:26:17.488722 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21b8466f-90f0-40bc-bca4-90fbf1744045-lib-modules\") pod \"kube-proxy-fj5sf\" (UID: \"21b8466f-90f0-40bc-bca4-90fbf1744045\") " pod="kube-system/kube-proxy-fj5sf" Apr 30 03:26:17.717158 kubelet[2493]: E0430 03:26:17.716620 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:17.717853 containerd[1469]: time="2025-04-30T03:26:17.717801496Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-fj5sf,Uid:21b8466f-90f0-40bc-bca4-90fbf1744045,Namespace:kube-system,Attempt:0,}" Apr 30 03:26:17.766749 containerd[1469]: time="2025-04-30T03:26:17.765777319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:26:17.766749 containerd[1469]: time="2025-04-30T03:26:17.765884150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:26:17.766749 containerd[1469]: time="2025-04-30T03:26:17.765923266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:17.766749 containerd[1469]: time="2025-04-30T03:26:17.766066985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:17.814769 systemd[1]: Started cri-containerd-34d5c06e8c1b3b14c8307ca003d4072108195c4ec5882263d719093b614e9aac.scope - libcontainer container 34d5c06e8c1b3b14c8307ca003d4072108195c4ec5882263d719093b614e9aac. Apr 30 03:26:17.824050 systemd[1]: Created slice kubepods-besteffort-podf5bc4280_c292_4116_a562_c2d01f6f9751.slice - libcontainer container kubepods-besteffort-podf5bc4280_c292_4116_a562_c2d01f6f9751.slice. 
Apr 30 03:26:17.856192 containerd[1469]: time="2025-04-30T03:26:17.856058853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fj5sf,Uid:21b8466f-90f0-40bc-bca4-90fbf1744045,Namespace:kube-system,Attempt:0,} returns sandbox id \"34d5c06e8c1b3b14c8307ca003d4072108195c4ec5882263d719093b614e9aac\"" Apr 30 03:26:17.857433 kubelet[2493]: E0430 03:26:17.857397 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:17.862846 containerd[1469]: time="2025-04-30T03:26:17.862654641Z" level=info msg="CreateContainer within sandbox \"34d5c06e8c1b3b14c8307ca003d4072108195c4ec5882263d719093b614e9aac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:26:17.878043 containerd[1469]: time="2025-04-30T03:26:17.877997481Z" level=info msg="CreateContainer within sandbox \"34d5c06e8c1b3b14c8307ca003d4072108195c4ec5882263d719093b614e9aac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abbd21524000e15e4040706c616839dd72b86172ffb5e735879c71595e09cf68\"" Apr 30 03:26:17.879077 containerd[1469]: time="2025-04-30T03:26:17.878947903Z" level=info msg="StartContainer for \"abbd21524000e15e4040706c616839dd72b86172ffb5e735879c71595e09cf68\"" Apr 30 03:26:17.892486 kubelet[2493]: I0430 03:26:17.891217 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghb4p\" (UniqueName: \"kubernetes.io/projected/f5bc4280-c292-4116-a562-c2d01f6f9751-kube-api-access-ghb4p\") pod \"tigera-operator-789496d6f5-2vvsh\" (UID: \"f5bc4280-c292-4116-a562-c2d01f6f9751\") " pod="tigera-operator/tigera-operator-789496d6f5-2vvsh" Apr 30 03:26:17.892486 kubelet[2493]: I0430 03:26:17.891273 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/f5bc4280-c292-4116-a562-c2d01f6f9751-var-lib-calico\") pod \"tigera-operator-789496d6f5-2vvsh\" (UID: \"f5bc4280-c292-4116-a562-c2d01f6f9751\") " pod="tigera-operator/tigera-operator-789496d6f5-2vvsh" Apr 30 03:26:17.917745 systemd[1]: Started cri-containerd-abbd21524000e15e4040706c616839dd72b86172ffb5e735879c71595e09cf68.scope - libcontainer container abbd21524000e15e4040706c616839dd72b86172ffb5e735879c71595e09cf68. Apr 30 03:26:17.952950 containerd[1469]: time="2025-04-30T03:26:17.952872101Z" level=info msg="StartContainer for \"abbd21524000e15e4040706c616839dd72b86172ffb5e735879c71595e09cf68\" returns successfully" Apr 30 03:26:18.131112 containerd[1469]: time="2025-04-30T03:26:18.130942522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-2vvsh,Uid:f5bc4280-c292-4116-a562-c2d01f6f9751,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:26:18.161422 containerd[1469]: time="2025-04-30T03:26:18.160690942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:26:18.161422 containerd[1469]: time="2025-04-30T03:26:18.160770500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:26:18.161422 containerd[1469]: time="2025-04-30T03:26:18.160786073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:18.163385 containerd[1469]: time="2025-04-30T03:26:18.162749199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:18.191742 systemd[1]: Started cri-containerd-aa2b82448387a1d16bc59834b9e371b40119feb103fb394ce5c6a53a1d41efb1.scope - libcontainer container aa2b82448387a1d16bc59834b9e371b40119feb103fb394ce5c6a53a1d41efb1. 
Apr 30 03:26:18.244431 containerd[1469]: time="2025-04-30T03:26:18.244278877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-2vvsh,Uid:f5bc4280-c292-4116-a562-c2d01f6f9751,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aa2b82448387a1d16bc59834b9e371b40119feb103fb394ce5c6a53a1d41efb1\"" Apr 30 03:26:18.249519 containerd[1469]: time="2025-04-30T03:26:18.249170224Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:26:18.383344 kubelet[2493]: E0430 03:26:18.383218 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:18.768108 kubelet[2493]: E0430 03:26:18.767943 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:18.825301 kubelet[2493]: E0430 03:26:18.824960 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:18.826871 kubelet[2493]: E0430 03:26:18.826810 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:18.827401 kubelet[2493]: E0430 03:26:18.827259 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:18.843343 kubelet[2493]: I0430 03:26:18.842875 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fj5sf" podStartSLOduration=1.8428516799999999 podStartE2EDuration="1.84285168s" podCreationTimestamp="2025-04-30 
03:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:26:18.842746172 +0000 UTC m=+8.243065954" watchObservedRunningTime="2025-04-30 03:26:18.84285168 +0000 UTC m=+8.243171464" Apr 30 03:26:19.829483 kubelet[2493]: E0430 03:26:19.829386 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:21.281427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2053981460.mount: Deactivated successfully. Apr 30 03:26:22.291489 containerd[1469]: time="2025-04-30T03:26:22.291313336Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:22.292923 containerd[1469]: time="2025-04-30T03:26:22.292583308Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:26:22.292923 containerd[1469]: time="2025-04-30T03:26:22.292844333Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:22.299828 containerd[1469]: time="2025-04-30T03:26:22.299749402Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:22.301942 containerd[1469]: time="2025-04-30T03:26:22.301313327Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size 
\"21998657\" in 4.052090805s" Apr 30 03:26:22.302388 containerd[1469]: time="2025-04-30T03:26:22.301387310Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:26:22.308150 containerd[1469]: time="2025-04-30T03:26:22.308096290Z" level=info msg="CreateContainer within sandbox \"aa2b82448387a1d16bc59834b9e371b40119feb103fb394ce5c6a53a1d41efb1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:26:22.325360 containerd[1469]: time="2025-04-30T03:26:22.325171299Z" level=info msg="CreateContainer within sandbox \"aa2b82448387a1d16bc59834b9e371b40119feb103fb394ce5c6a53a1d41efb1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3e1ffdaf62e7b15b6db6a34e229edd9168e9088b2c9b24b292e11379089b5f03\"" Apr 30 03:26:22.326354 containerd[1469]: time="2025-04-30T03:26:22.326286503Z" level=info msg="StartContainer for \"3e1ffdaf62e7b15b6db6a34e229edd9168e9088b2c9b24b292e11379089b5f03\"" Apr 30 03:26:22.376843 systemd[1]: Started cri-containerd-3e1ffdaf62e7b15b6db6a34e229edd9168e9088b2c9b24b292e11379089b5f03.scope - libcontainer container 3e1ffdaf62e7b15b6db6a34e229edd9168e9088b2c9b24b292e11379089b5f03. 
Apr 30 03:26:22.417321 containerd[1469]: time="2025-04-30T03:26:22.417249342Z" level=info msg="StartContainer for \"3e1ffdaf62e7b15b6db6a34e229edd9168e9088b2c9b24b292e11379089b5f03\" returns successfully" Apr 30 03:26:22.713088 kubelet[2493]: E0430 03:26:22.711843 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:22.857655 kubelet[2493]: I0430 03:26:22.857135 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-2vvsh" podStartSLOduration=1.799891637 podStartE2EDuration="5.85709226s" podCreationTimestamp="2025-04-30 03:26:17 +0000 UTC" firstStartedPulling="2025-04-30 03:26:18.246759682 +0000 UTC m=+7.647079446" lastFinishedPulling="2025-04-30 03:26:22.303960291 +0000 UTC m=+11.704280069" observedRunningTime="2025-04-30 03:26:22.857059362 +0000 UTC m=+12.257379140" watchObservedRunningTime="2025-04-30 03:26:22.85709226 +0000 UTC m=+12.257412043" Apr 30 03:26:24.344918 update_engine[1447]: I20250430 03:26:24.344727 1447 update_attempter.cc:509] Updating boot flags... Apr 30 03:26:24.387500 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2871) Apr 30 03:26:24.492786 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2871) Apr 30 03:26:24.561799 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2871) Apr 30 03:26:25.772145 systemd[1]: Created slice kubepods-besteffort-podf984b4f5_7db5_484f_b52b_868225ed0fd8.slice - libcontainer container kubepods-besteffort-podf984b4f5_7db5_484f_b52b_868225ed0fd8.slice. 
Apr 30 03:26:25.796761 kubelet[2493]: W0430 03:26:25.790420 2493 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081.3.3-2-e7e0406ed5" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object Apr 30 03:26:25.796761 kubelet[2493]: I0430 03:26:25.791680 2493 status_manager.go:890] "Failed to get status for pod" podUID="f984b4f5-7db5-484f-b52b-868225ed0fd8" pod="calico-system/calico-typha-9599479cb-mtzld" err="pods \"calico-typha-9599479cb-mtzld\" is forbidden: User \"system:node:ci-4081.3.3-2-e7e0406ed5\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object" Apr 30 03:26:25.796761 kubelet[2493]: W0430 03:26:25.791969 2493 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-2-e7e0406ed5" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object Apr 30 03:26:25.796761 kubelet[2493]: E0430 03:26:25.794518 2493 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.3-2-e7e0406ed5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object" logger="UnhandledError" Apr 30 03:26:25.796761 kubelet[2493]: W0430 03:26:25.794746 2493 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User 
"system:node:ci-4081.3.3-2-e7e0406ed5" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object Apr 30 03:26:25.797535 kubelet[2493]: E0430 03:26:25.794786 2493 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081.3.3-2-e7e0406ed5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object" logger="UnhandledError" Apr 30 03:26:25.800143 kubelet[2493]: E0430 03:26:25.800058 2493 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.3-2-e7e0406ed5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object" logger="UnhandledError" Apr 30 03:26:25.854576 kubelet[2493]: I0430 03:26:25.854324 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgpjx\" (UniqueName: \"kubernetes.io/projected/f984b4f5-7db5-484f-b52b-868225ed0fd8-kube-api-access-zgpjx\") pod \"calico-typha-9599479cb-mtzld\" (UID: \"f984b4f5-7db5-484f-b52b-868225ed0fd8\") " pod="calico-system/calico-typha-9599479cb-mtzld" Apr 30 03:26:25.854576 kubelet[2493]: I0430 03:26:25.854398 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f984b4f5-7db5-484f-b52b-868225ed0fd8-tigera-ca-bundle\") pod \"calico-typha-9599479cb-mtzld\" (UID: \"f984b4f5-7db5-484f-b52b-868225ed0fd8\") " 
pod="calico-system/calico-typha-9599479cb-mtzld" Apr 30 03:26:25.854576 kubelet[2493]: I0430 03:26:25.854428 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f984b4f5-7db5-484f-b52b-868225ed0fd8-typha-certs\") pod \"calico-typha-9599479cb-mtzld\" (UID: \"f984b4f5-7db5-484f-b52b-868225ed0fd8\") " pod="calico-system/calico-typha-9599479cb-mtzld" Apr 30 03:26:25.972357 systemd[1]: Created slice kubepods-besteffort-podb145449a_96c4_48cc_acaa_5d67b046b6d9.slice - libcontainer container kubepods-besteffort-podb145449a_96c4_48cc_acaa_5d67b046b6d9.slice. Apr 30 03:26:26.056095 kubelet[2493]: I0430 03:26:26.056016 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b145449a-96c4-48cc-acaa-5d67b046b6d9-cni-bin-dir\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056095 kubelet[2493]: I0430 03:26:26.056089 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b145449a-96c4-48cc-acaa-5d67b046b6d9-cni-net-dir\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056373 kubelet[2493]: I0430 03:26:26.056118 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8db\" (UniqueName: \"kubernetes.io/projected/b145449a-96c4-48cc-acaa-5d67b046b6d9-kube-api-access-hn8db\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056373 kubelet[2493]: I0430 03:26:26.056152 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/b145449a-96c4-48cc-acaa-5d67b046b6d9-tigera-ca-bundle\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056373 kubelet[2493]: I0430 03:26:26.056178 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b145449a-96c4-48cc-acaa-5d67b046b6d9-xtables-lock\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056373 kubelet[2493]: I0430 03:26:26.056208 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b145449a-96c4-48cc-acaa-5d67b046b6d9-var-run-calico\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056373 kubelet[2493]: I0430 03:26:26.056232 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b145449a-96c4-48cc-acaa-5d67b046b6d9-node-certs\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056705 kubelet[2493]: I0430 03:26:26.056256 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b145449a-96c4-48cc-acaa-5d67b046b6d9-var-lib-calico\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056705 kubelet[2493]: I0430 03:26:26.056284 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/b145449a-96c4-48cc-acaa-5d67b046b6d9-lib-modules\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056705 kubelet[2493]: I0430 03:26:26.056314 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b145449a-96c4-48cc-acaa-5d67b046b6d9-cni-log-dir\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056705 kubelet[2493]: I0430 03:26:26.056361 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b145449a-96c4-48cc-acaa-5d67b046b6d9-policysync\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.056705 kubelet[2493]: I0430 03:26:26.056387 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b145449a-96c4-48cc-acaa-5d67b046b6d9-flexvol-driver-host\") pod \"calico-node-tt4wb\" (UID: \"b145449a-96c4-48cc-acaa-5d67b046b6d9\") " pod="calico-system/calico-node-tt4wb" Apr 30 03:26:26.171609 kubelet[2493]: E0430 03:26:26.171555 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.172442 kubelet[2493]: W0430 03:26:26.172255 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.184817 kubelet[2493]: E0430 03:26:26.184563 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.220526 kubelet[2493]: E0430 03:26:26.220438 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kld5k" podUID="b718a743-a03f-4839-ab75-67d0237668cd" Apr 30 03:26:26.226828 kubelet[2493]: E0430 03:26:26.226781 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.226828 kubelet[2493]: W0430 03:26:26.226816 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.227095 kubelet[2493]: E0430 03:26:26.226847 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.227160 kubelet[2493]: E0430 03:26:26.227132 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.227160 kubelet[2493]: W0430 03:26:26.227143 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.227160 kubelet[2493]: E0430 03:26:26.227157 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.227711 kubelet[2493]: E0430 03:26:26.227429 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.227711 kubelet[2493]: W0430 03:26:26.227462 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.227711 kubelet[2493]: E0430 03:26:26.227477 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.228570 kubelet[2493]: E0430 03:26:26.228519 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.228570 kubelet[2493]: W0430 03:26:26.228543 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.228570 kubelet[2493]: E0430 03:26:26.228563 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.230415 kubelet[2493]: E0430 03:26:26.230368 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.230415 kubelet[2493]: W0430 03:26:26.230402 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.230666 kubelet[2493]: E0430 03:26:26.230427 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.233401 kubelet[2493]: E0430 03:26:26.231436 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.233401 kubelet[2493]: W0430 03:26:26.231476 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.233401 kubelet[2493]: E0430 03:26:26.231500 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.233401 kubelet[2493]: E0430 03:26:26.232794 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.233401 kubelet[2493]: W0430 03:26:26.232818 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.233401 kubelet[2493]: E0430 03:26:26.232841 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.233401 kubelet[2493]: E0430 03:26:26.233205 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.233401 kubelet[2493]: W0430 03:26:26.233222 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.233401 kubelet[2493]: E0430 03:26:26.233241 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.234300 kubelet[2493]: E0430 03:26:26.234266 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.234300 kubelet[2493]: W0430 03:26:26.234293 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.234300 kubelet[2493]: E0430 03:26:26.234313 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.234654 kubelet[2493]: E0430 03:26:26.234631 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.234654 kubelet[2493]: W0430 03:26:26.234652 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.234765 kubelet[2493]: E0430 03:26:26.234669 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.236820 kubelet[2493]: I0430 03:26:26.236762 2493 status_manager.go:890] "Failed to get status for pod" podUID="b718a743-a03f-4839-ab75-67d0237668cd" pod="calico-system/csi-node-driver-kld5k" err="pods \"csi-node-driver-kld5k\" is forbidden: User \"system:node:ci-4081.3.3-2-e7e0406ed5\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object" Apr 30 03:26:26.237201 kubelet[2493]: E0430 03:26:26.237172 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.237201 kubelet[2493]: W0430 03:26:26.237197 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.237317 kubelet[2493]: E0430 03:26:26.237223 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.237568 kubelet[2493]: E0430 03:26:26.237549 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.237631 kubelet[2493]: W0430 03:26:26.237568 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.237631 kubelet[2493]: E0430 03:26:26.237586 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.237966 kubelet[2493]: E0430 03:26:26.237935 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.238026 kubelet[2493]: W0430 03:26:26.237966 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.238026 kubelet[2493]: E0430 03:26:26.237983 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.238810 kubelet[2493]: E0430 03:26:26.238776 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.238810 kubelet[2493]: W0430 03:26:26.238801 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.238937 kubelet[2493]: E0430 03:26:26.238820 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.241140 kubelet[2493]: E0430 03:26:26.241099 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.241140 kubelet[2493]: W0430 03:26:26.241133 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.241304 kubelet[2493]: E0430 03:26:26.241158 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.241501 kubelet[2493]: E0430 03:26:26.241485 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.241551 kubelet[2493]: W0430 03:26:26.241502 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.241551 kubelet[2493]: E0430 03:26:26.241520 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.241801 kubelet[2493]: E0430 03:26:26.241783 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.241801 kubelet[2493]: W0430 03:26:26.241800 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.241912 kubelet[2493]: E0430 03:26:26.241816 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.244704 kubelet[2493]: E0430 03:26:26.244646 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.244704 kubelet[2493]: W0430 03:26:26.244682 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.244704 kubelet[2493]: E0430 03:26:26.244712 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.245111 kubelet[2493]: E0430 03:26:26.245079 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.245111 kubelet[2493]: W0430 03:26:26.245099 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.245224 kubelet[2493]: E0430 03:26:26.245117 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.245416 kubelet[2493]: E0430 03:26:26.245393 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.245416 kubelet[2493]: W0430 03:26:26.245411 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.245658 kubelet[2493]: E0430 03:26:26.245425 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.258668 kubelet[2493]: E0430 03:26:26.258597 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.258668 kubelet[2493]: W0430 03:26:26.258634 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.258668 kubelet[2493]: E0430 03:26:26.258668 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.259065 kubelet[2493]: I0430 03:26:26.258731 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b718a743-a03f-4839-ab75-67d0237668cd-kubelet-dir\") pod \"csi-node-driver-kld5k\" (UID: \"b718a743-a03f-4839-ab75-67d0237668cd\") " pod="calico-system/csi-node-driver-kld5k" Apr 30 03:26:26.259400 kubelet[2493]: E0430 03:26:26.259368 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.259489 kubelet[2493]: W0430 03:26:26.259400 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.259489 kubelet[2493]: E0430 03:26:26.259441 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.259589 kubelet[2493]: I0430 03:26:26.259498 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tl6\" (UniqueName: \"kubernetes.io/projected/b718a743-a03f-4839-ab75-67d0237668cd-kube-api-access-78tl6\") pod \"csi-node-driver-kld5k\" (UID: \"b718a743-a03f-4839-ab75-67d0237668cd\") " pod="calico-system/csi-node-driver-kld5k" Apr 30 03:26:26.259864 kubelet[2493]: E0430 03:26:26.259840 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.259929 kubelet[2493]: W0430 03:26:26.259864 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.259929 kubelet[2493]: E0430 03:26:26.259890 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.259929 kubelet[2493]: I0430 03:26:26.259919 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b718a743-a03f-4839-ab75-67d0237668cd-varrun\") pod \"csi-node-driver-kld5k\" (UID: \"b718a743-a03f-4839-ab75-67d0237668cd\") " pod="calico-system/csi-node-driver-kld5k" Apr 30 03:26:26.261632 kubelet[2493]: E0430 03:26:26.261589 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.261632 kubelet[2493]: W0430 03:26:26.261617 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.262052 kubelet[2493]: E0430 03:26:26.261754 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.262052 kubelet[2493]: I0430 03:26:26.261814 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b718a743-a03f-4839-ab75-67d0237668cd-registration-dir\") pod \"csi-node-driver-kld5k\" (UID: \"b718a743-a03f-4839-ab75-67d0237668cd\") " pod="calico-system/csi-node-driver-kld5k" Apr 30 03:26:26.263874 kubelet[2493]: E0430 03:26:26.263212 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.263874 kubelet[2493]: W0430 03:26:26.263246 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.263874 kubelet[2493]: E0430 03:26:26.263803 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.264809 kubelet[2493]: E0430 03:26:26.264781 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.264809 kubelet[2493]: W0430 03:26:26.264806 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.265792 kubelet[2493]: E0430 03:26:26.264994 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.266732 kubelet[2493]: E0430 03:26:26.266693 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.266732 kubelet[2493]: W0430 03:26:26.266725 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.267149 kubelet[2493]: E0430 03:26:26.267105 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.267239 kubelet[2493]: E0430 03:26:26.267157 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.267239 kubelet[2493]: W0430 03:26:26.267177 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.267357 kubelet[2493]: E0430 03:26:26.267268 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.267357 kubelet[2493]: I0430 03:26:26.267316 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b718a743-a03f-4839-ab75-67d0237668cd-socket-dir\") pod \"csi-node-driver-kld5k\" (UID: \"b718a743-a03f-4839-ab75-67d0237668cd\") " pod="calico-system/csi-node-driver-kld5k" Apr 30 03:26:26.268491 kubelet[2493]: E0430 03:26:26.267587 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.268491 kubelet[2493]: W0430 03:26:26.267609 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.268491 kubelet[2493]: E0430 03:26:26.267853 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.268491 kubelet[2493]: W0430 03:26:26.267866 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.268491 kubelet[2493]: E0430 03:26:26.267882 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.268491 kubelet[2493]: E0430 03:26:26.267902 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.268491 kubelet[2493]: E0430 03:26:26.268273 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.268491 kubelet[2493]: W0430 03:26:26.268290 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.268491 kubelet[2493]: E0430 03:26:26.268334 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.268854 kubelet[2493]: E0430 03:26:26.268605 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.268854 kubelet[2493]: W0430 03:26:26.268619 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.268854 kubelet[2493]: E0430 03:26:26.268634 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.268951 kubelet[2493]: E0430 03:26:26.268876 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.268951 kubelet[2493]: W0430 03:26:26.268890 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.269020 kubelet[2493]: E0430 03:26:26.268974 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.269294 kubelet[2493]: E0430 03:26:26.269272 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.269294 kubelet[2493]: W0430 03:26:26.269291 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.269437 kubelet[2493]: E0430 03:26:26.269306 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.269699 kubelet[2493]: E0430 03:26:26.269680 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.269699 kubelet[2493]: W0430 03:26:26.269698 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.269796 kubelet[2493]: E0430 03:26:26.269714 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.370121 kubelet[2493]: E0430 03:26:26.369564 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.370121 kubelet[2493]: W0430 03:26:26.369607 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.370121 kubelet[2493]: E0430 03:26:26.369641 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.371814 kubelet[2493]: E0430 03:26:26.371634 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.371814 kubelet[2493]: W0430 03:26:26.371670 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.371814 kubelet[2493]: E0430 03:26:26.371713 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.373492 kubelet[2493]: E0430 03:26:26.372764 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.373492 kubelet[2493]: W0430 03:26:26.372816 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.373492 kubelet[2493]: E0430 03:26:26.372856 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.374018 kubelet[2493]: E0430 03:26:26.373929 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.374018 kubelet[2493]: W0430 03:26:26.373963 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.374018 kubelet[2493]: E0430 03:26:26.373992 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.376660 kubelet[2493]: E0430 03:26:26.374558 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.378076 kubelet[2493]: W0430 03:26:26.376441 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.378076 kubelet[2493]: E0430 03:26:26.376921 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.378076 kubelet[2493]: E0430 03:26:26.377340 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.378076 kubelet[2493]: W0430 03:26:26.377358 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.378076 kubelet[2493]: E0430 03:26:26.377524 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.378076 kubelet[2493]: E0430 03:26:26.377796 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.378076 kubelet[2493]: W0430 03:26:26.377811 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.378076 kubelet[2493]: E0430 03:26:26.377852 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.378614 kubelet[2493]: E0430 03:26:26.378199 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.378614 kubelet[2493]: W0430 03:26:26.378216 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.378614 kubelet[2493]: E0430 03:26:26.378249 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.378614 kubelet[2493]: E0430 03:26:26.378614 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.378834 kubelet[2493]: W0430 03:26:26.378629 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.378834 kubelet[2493]: E0430 03:26:26.378655 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.379360 kubelet[2493]: E0430 03:26:26.379232 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.379360 kubelet[2493]: W0430 03:26:26.379321 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.379885 kubelet[2493]: E0430 03:26:26.379615 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.379885 kubelet[2493]: E0430 03:26:26.379635 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.379885 kubelet[2493]: W0430 03:26:26.379719 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.379885 kubelet[2493]: E0430 03:26:26.379759 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.380155 kubelet[2493]: E0430 03:26:26.380133 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.380236 kubelet[2493]: W0430 03:26:26.380219 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.381773 kubelet[2493]: E0430 03:26:26.380340 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.382019 kubelet[2493]: E0430 03:26:26.381991 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.384551 kubelet[2493]: W0430 03:26:26.382141 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.384551 kubelet[2493]: E0430 03:26:26.382222 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.384921 kubelet[2493]: E0430 03:26:26.384882 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.385222 kubelet[2493]: W0430 03:26:26.385018 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.385222 kubelet[2493]: E0430 03:26:26.385097 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.387667 kubelet[2493]: E0430 03:26:26.387314 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.387667 kubelet[2493]: W0430 03:26:26.387353 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.387667 kubelet[2493]: E0430 03:26:26.387431 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.388337 kubelet[2493]: E0430 03:26:26.388307 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.388881 kubelet[2493]: W0430 03:26:26.388422 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.388881 kubelet[2493]: E0430 03:26:26.388538 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.389108 kubelet[2493]: E0430 03:26:26.389090 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.389182 kubelet[2493]: W0430 03:26:26.389168 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.389696 kubelet[2493]: E0430 03:26:26.389524 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.389992 kubelet[2493]: W0430 03:26:26.389795 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.389992 kubelet[2493]: E0430 03:26:26.389837 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.389992 kubelet[2493]: E0430 03:26:26.389858 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.391441 kubelet[2493]: E0430 03:26:26.391252 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.391441 kubelet[2493]: W0430 03:26:26.391277 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.391795 kubelet[2493]: E0430 03:26:26.391778 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.391950 kubelet[2493]: W0430 03:26:26.391850 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.392184 kubelet[2493]: E0430 03:26:26.392166 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.392346 kubelet[2493]: W0430 03:26:26.392261 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.392538 kubelet[2493]: E0430 03:26:26.392525 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.392818 kubelet[2493]: W0430 03:26:26.392610 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.392818 kubelet[2493]: E0430 03:26:26.392631 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.393177 kubelet[2493]: E0430 03:26:26.393153 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.393278 kubelet[2493]: W0430 03:26:26.393263 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.393339 kubelet[2493]: E0430 03:26:26.393327 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.393765 kubelet[2493]: E0430 03:26:26.393676 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.394787 kubelet[2493]: E0430 03:26:26.394469 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.394787 kubelet[2493]: W0430 03:26:26.394488 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.394787 kubelet[2493]: E0430 03:26:26.394503 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.394787 kubelet[2493]: E0430 03:26:26.394558 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.394787 kubelet[2493]: E0430 03:26:26.394578 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.395246 kubelet[2493]: E0430 03:26:26.395228 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.396818 kubelet[2493]: W0430 03:26:26.396717 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.396818 kubelet[2493]: E0430 03:26:26.396762 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.813877 kubelet[2493]: E0430 03:26:26.811761 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.813877 kubelet[2493]: W0430 03:26:26.811814 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.813877 kubelet[2493]: E0430 03:26:26.811850 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.853099 kubelet[2493]: E0430 03:26:26.851526 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.853099 kubelet[2493]: W0430 03:26:26.851560 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.853099 kubelet[2493]: E0430 03:26:26.851630 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:26.853707 kubelet[2493]: E0430 03:26:26.853618 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:26.854084 kubelet[2493]: W0430 03:26:26.854042 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:26.854342 kubelet[2493]: E0430 03:26:26.854319 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:26.988054 kubelet[2493]: E0430 03:26:26.987963 2493 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 30 03:26:26.988054 kubelet[2493]: E0430 03:26:26.988053 2493 projected.go:194] Error preparing data for projected volume kube-api-access-zgpjx for pod calico-system/calico-typha-9599479cb-mtzld: failed to sync configmap cache: timed out waiting for the condition Apr 30 03:26:26.988372 kubelet[2493]: E0430 03:26:26.988186 2493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f984b4f5-7db5-484f-b52b-868225ed0fd8-kube-api-access-zgpjx podName:f984b4f5-7db5-484f-b52b-868225ed0fd8 nodeName:}" failed. No retries permitted until 2025-04-30 03:26:27.488148515 +0000 UTC m=+16.888468299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zgpjx" (UniqueName: "kubernetes.io/projected/f984b4f5-7db5-484f-b52b-868225ed0fd8-kube-api-access-zgpjx") pod "calico-typha-9599479cb-mtzld" (UID: "f984b4f5-7db5-484f-b52b-868225ed0fd8") : failed to sync configmap cache: timed out waiting for the condition Apr 30 03:26:27.087042 kubelet[2493]: E0430 03:26:27.086530 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.087042 kubelet[2493]: W0430 03:26:27.086570 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.087042 kubelet[2493]: E0430 03:26:27.086600 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:27.181731 kubelet[2493]: E0430 03:26:27.180519 2493 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 30 03:26:27.181731 kubelet[2493]: E0430 03:26:27.180584 2493 projected.go:194] Error preparing data for projected volume kube-api-access-hn8db for pod calico-system/calico-node-tt4wb: failed to sync configmap cache: timed out waiting for the condition Apr 30 03:26:27.181731 kubelet[2493]: E0430 03:26:27.180677 2493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b145449a-96c4-48cc-acaa-5d67b046b6d9-kube-api-access-hn8db podName:b145449a-96c4-48cc-acaa-5d67b046b6d9 nodeName:}" failed. No retries permitted until 2025-04-30 03:26:27.680651728 +0000 UTC m=+17.080971502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hn8db" (UniqueName: "kubernetes.io/projected/b145449a-96c4-48cc-acaa-5d67b046b6d9-kube-api-access-hn8db") pod "calico-node-tt4wb" (UID: "b145449a-96c4-48cc-acaa-5d67b046b6d9") : failed to sync configmap cache: timed out waiting for the condition Apr 30 03:26:27.187896 kubelet[2493]: E0430 03:26:27.187843 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.187896 kubelet[2493]: W0430 03:26:27.187880 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.187896 kubelet[2493]: E0430 03:26:27.187909 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:27.188300 kubelet[2493]: E0430 03:26:27.188281 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.188300 kubelet[2493]: W0430 03:26:27.188300 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.188403 kubelet[2493]: E0430 03:26:27.188319 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:27.289088 kubelet[2493]: E0430 03:26:27.289040 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.289088 kubelet[2493]: W0430 03:26:27.289076 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.289088 kubelet[2493]: E0430 03:26:27.289102 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:27.289588 kubelet[2493]: E0430 03:26:27.289388 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.289588 kubelet[2493]: W0430 03:26:27.289399 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.289588 kubelet[2493]: E0430 03:26:27.289411 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:27.353607 kubelet[2493]: E0430 03:26:27.352786 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.353607 kubelet[2493]: W0430 03:26:27.352813 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.353607 kubelet[2493]: E0430 03:26:27.352839 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:27.390293 kubelet[2493]: E0430 03:26:27.390219 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.390293 kubelet[2493]: W0430 03:26:27.390262 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.390293 kubelet[2493]: E0430 03:26:27.390294 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:27.390730 kubelet[2493]: E0430 03:26:27.390698 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.390730 kubelet[2493]: W0430 03:26:27.390719 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.390730 kubelet[2493]: E0430 03:26:27.390740 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:27.492819 kubelet[2493]: E0430 03:26:27.492574 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.492819 kubelet[2493]: W0430 03:26:27.492609 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.492819 kubelet[2493]: E0430 03:26:27.492640 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:27.493235 kubelet[2493]: E0430 03:26:27.493206 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.493235 kubelet[2493]: W0430 03:26:27.493228 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.493407 kubelet[2493]: E0430 03:26:27.493255 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:27.493776 kubelet[2493]: E0430 03:26:27.493750 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.493867 kubelet[2493]: W0430 03:26:27.493772 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.493867 kubelet[2493]: E0430 03:26:27.493855 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:27.494205 kubelet[2493]: E0430 03:26:27.494086 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.494483 kubelet[2493]: W0430 03:26:27.494102 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.494483 kubelet[2493]: E0430 03:26:27.494380 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:27.494882 kubelet[2493]: E0430 03:26:27.494643 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.494882 kubelet[2493]: W0430 03:26:27.494658 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.494882 kubelet[2493]: E0430 03:26:27.494673 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:27.495169 kubelet[2493]: E0430 03:26:27.495119 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.495169 kubelet[2493]: W0430 03:26:27.495135 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.495169 kubelet[2493]: E0430 03:26:27.495155 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:27.505473 kubelet[2493]: E0430 03:26:27.504244 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.505473 kubelet[2493]: W0430 03:26:27.504277 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.505473 kubelet[2493]: E0430 03:26:27.504302 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:27.594822 kubelet[2493]: E0430 03:26:27.594623 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:27.594822 kubelet[2493]: W0430 03:26:27.594655 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:27.594822 kubelet[2493]: E0430 03:26:27.594686 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:27.600990 kubelet[2493]: E0430 03:26:27.600946 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:27.601759 containerd[1469]: time="2025-04-30T03:26:27.601583629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9599479cb-mtzld,Uid:f984b4f5-7db5-484f-b52b-868225ed0fd8,Namespace:calico-system,Attempt:0,}" Apr 30 03:26:27.634379 containerd[1469]: time="2025-04-30T03:26:27.632805321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:26:27.634379 containerd[1469]: time="2025-04-30T03:26:27.632873983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:26:27.634379 containerd[1469]: time="2025-04-30T03:26:27.632885681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:27.634379 containerd[1469]: time="2025-04-30T03:26:27.632997189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:27.666797 systemd[1]: Started cri-containerd-c151004e946ce1f03172ad257f38c5872bc25456e8284fb88d30ac8071c9c338.scope - libcontainer container c151004e946ce1f03172ad257f38c5872bc25456e8284fb88d30ac8071c9c338. 
Apr 30 03:26:27.697786 kubelet[2493]: E0430 03:26:27.697632 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:27.697786 kubelet[2493]: W0430 03:26:27.697781 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:27.698128 kubelet[2493]: E0430 03:26:27.697870 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:27.699316 kubelet[2493]: E0430 03:26:27.699226 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:27.699316 kubelet[2493]: W0430 03:26:27.699310 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:27.699516 kubelet[2493]: E0430 03:26:27.699344 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:27.699824 kubelet[2493]: E0430 03:26:27.699800 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:27.699873 kubelet[2493]: W0430 03:26:27.699823 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:27.699873 kubelet[2493]: E0430 03:26:27.699847 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:27.700239 kubelet[2493]: E0430 03:26:27.700218 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:27.700286 kubelet[2493]: W0430 03:26:27.700239 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:27.700286 kubelet[2493]: E0430 03:26:27.700270 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:27.700887 kubelet[2493]: E0430 03:26:27.700865 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:27.700887 kubelet[2493]: W0430 03:26:27.700886 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:27.700992 kubelet[2493]: E0430 03:26:27.700904 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:27.714385 kubelet[2493]: E0430 03:26:27.714338 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:27.714986 kubelet[2493]: W0430 03:26:27.714853 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:27.715095 kubelet[2493]: E0430 03:26:27.715013 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:27.734318 containerd[1469]: time="2025-04-30T03:26:27.734258296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9599479cb-mtzld,Uid:f984b4f5-7db5-484f-b52b-868225ed0fd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c151004e946ce1f03172ad257f38c5872bc25456e8284fb88d30ac8071c9c338\""
Apr 30 03:26:27.737165 kubelet[2493]: E0430 03:26:27.736935 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:27.739898 containerd[1469]: time="2025-04-30T03:26:27.739628839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
Apr 30 03:26:27.742302 kubelet[2493]: E0430 03:26:27.742236 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kld5k" podUID="b718a743-a03f-4839-ab75-67d0237668cd"
Apr 30 03:26:27.780507 kubelet[2493]: E0430 03:26:27.778122 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:27.780801 containerd[1469]: time="2025-04-30T03:26:27.780751195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tt4wb,Uid:b145449a-96c4-48cc-acaa-5d67b046b6d9,Namespace:calico-system,Attempt:0,}"
Apr 30 03:26:27.822396 containerd[1469]: time="2025-04-30T03:26:27.821849553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:26:27.822396 containerd[1469]: time="2025-04-30T03:26:27.822027456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:26:27.822396 containerd[1469]: time="2025-04-30T03:26:27.822089749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:27.825904 containerd[1469]: time="2025-04-30T03:26:27.825615468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:27.850767 systemd[1]: Started cri-containerd-9ce0eea2fe8538f61e9c565a3fa808deb7fc7e39ce30801bf36df8fdc067f189.scope - libcontainer container 9ce0eea2fe8538f61e9c565a3fa808deb7fc7e39ce30801bf36df8fdc067f189.
Apr 30 03:26:27.894431 containerd[1469]: time="2025-04-30T03:26:27.892775208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tt4wb,Uid:b145449a-96c4-48cc-acaa-5d67b046b6d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ce0eea2fe8538f61e9c565a3fa808deb7fc7e39ce30801bf36df8fdc067f189\""
Apr 30 03:26:27.895615 kubelet[2493]: E0430 03:26:27.895567 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:29.741950 kubelet[2493]: E0430 03:26:29.741902 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kld5k" podUID="b718a743-a03f-4839-ab75-67d0237668cd"
Apr 30 03:26:31.312389 containerd[1469]: time="2025-04-30T03:26:31.311341617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:26:31.312389 containerd[1469]: time="2025-04-30T03:26:31.312340029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
Apr 30 03:26:31.313084 containerd[1469]: time="2025-04-30T03:26:31.313045725Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:26:31.315640 containerd[1469]: time="2025-04-30T03:26:31.315560369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:26:31.316721 containerd[1469]: time="2025-04-30T03:26:31.316684942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.576979524s"
Apr 30 03:26:31.316870 containerd[1469]: time="2025-04-30T03:26:31.316855658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
Apr 30 03:26:31.322775 containerd[1469]: time="2025-04-30T03:26:31.322732810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
Apr 30 03:26:31.345830 containerd[1469]: time="2025-04-30T03:26:31.344991278Z" level=info msg="CreateContainer within sandbox \"c151004e946ce1f03172ad257f38c5872bc25456e8284fb88d30ac8071c9c338\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 30 03:26:31.381973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346239738.mount: Deactivated successfully.
Apr 30 03:26:31.385869 containerd[1469]: time="2025-04-30T03:26:31.385791258Z" level=info msg="CreateContainer within sandbox \"c151004e946ce1f03172ad257f38c5872bc25456e8284fb88d30ac8071c9c338\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9ae9c3a979523dea080ab73373a97fff4dd1bf4794821a207bdbb8ccaa6a59f6\""
Apr 30 03:26:31.388523 containerd[1469]: time="2025-04-30T03:26:31.386841824Z" level=info msg="StartContainer for \"9ae9c3a979523dea080ab73373a97fff4dd1bf4794821a207bdbb8ccaa6a59f6\""
Apr 30 03:26:31.435730 systemd[1]: Started cri-containerd-9ae9c3a979523dea080ab73373a97fff4dd1bf4794821a207bdbb8ccaa6a59f6.scope - libcontainer container 9ae9c3a979523dea080ab73373a97fff4dd1bf4794821a207bdbb8ccaa6a59f6.
Apr 30 03:26:31.507088 containerd[1469]: time="2025-04-30T03:26:31.507000768Z" level=info msg="StartContainer for \"9ae9c3a979523dea080ab73373a97fff4dd1bf4794821a207bdbb8ccaa6a59f6\" returns successfully"
Apr 30 03:26:31.743569 kubelet[2493]: E0430 03:26:31.741580 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kld5k" podUID="b718a743-a03f-4839-ab75-67d0237668cd"
Apr 30 03:26:31.881086 kubelet[2493]: E0430 03:26:31.881030 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:31.891227 kubelet[2493]: E0430 03:26:31.891077 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.891227 kubelet[2493]: W0430 03:26:31.891136 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.892877 kubelet[2493]: E0430 03:26:31.892556 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.893259 kubelet[2493]: E0430 03:26:31.893240 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.893339 kubelet[2493]: W0430 03:26:31.893326 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.893405 kubelet[2493]: E0430 03:26:31.893396 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.893887 kubelet[2493]: E0430 03:26:31.893859 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.894004 kubelet[2493]: W0430 03:26:31.893991 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.894258 kubelet[2493]: E0430 03:26:31.894069 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.894511 kubelet[2493]: E0430 03:26:31.894496 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.894607 kubelet[2493]: W0430 03:26:31.894592 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.894692 kubelet[2493]: E0430 03:26:31.894677 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.895439 kubelet[2493]: E0430 03:26:31.895211 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.895439 kubelet[2493]: W0430 03:26:31.895226 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.895439 kubelet[2493]: E0430 03:26:31.895244 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.895800 kubelet[2493]: E0430 03:26:31.895785 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.895878 kubelet[2493]: W0430 03:26:31.895867 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.896182 kubelet[2493]: E0430 03:26:31.895941 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.896929 kubelet[2493]: E0430 03:26:31.896636 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.897354 kubelet[2493]: W0430 03:26:31.897037 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.897354 kubelet[2493]: E0430 03:26:31.897067 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.897619 kubelet[2493]: E0430 03:26:31.897606 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.897728 kubelet[2493]: W0430 03:26:31.897703 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.897798 kubelet[2493]: E0430 03:26:31.897787 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.898289 kubelet[2493]: E0430 03:26:31.898273 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.898778 kubelet[2493]: W0430 03:26:31.898374 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.898778 kubelet[2493]: E0430 03:26:31.898584 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.899402 kubelet[2493]: E0430 03:26:31.899374 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.899518 kubelet[2493]: W0430 03:26:31.899505 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.899609 kubelet[2493]: E0430 03:26:31.899598 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.900152 kubelet[2493]: E0430 03:26:31.900136 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.900280 kubelet[2493]: W0430 03:26:31.900266 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.900414 kubelet[2493]: E0430 03:26:31.900346 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.901067 kubelet[2493]: E0430 03:26:31.901046 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.901329 kubelet[2493]: W0430 03:26:31.901122 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.901329 kubelet[2493]: E0430 03:26:31.901154 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.903527 kubelet[2493]: E0430 03:26:31.902573 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.903527 kubelet[2493]: W0430 03:26:31.902624 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.903527 kubelet[2493]: E0430 03:26:31.902650 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.905802 kubelet[2493]: E0430 03:26:31.905681 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.905802 kubelet[2493]: W0430 03:26:31.905727 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.905802 kubelet[2493]: E0430 03:26:31.905762 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.906728 kubelet[2493]: E0430 03:26:31.906688 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.906728 kubelet[2493]: W0430 03:26:31.906719 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.906971 kubelet[2493]: E0430 03:26:31.906763 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.921265 kubelet[2493]: I0430 03:26:31.920897 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9599479cb-mtzld" podStartSLOduration=3.341738786 podStartE2EDuration="6.920879255s" podCreationTimestamp="2025-04-30 03:26:25 +0000 UTC" firstStartedPulling="2025-04-30 03:26:27.73923083 +0000 UTC m=+17.139550607" lastFinishedPulling="2025-04-30 03:26:31.318371296 +0000 UTC m=+20.718691076" observedRunningTime="2025-04-30 03:26:31.914690071 +0000 UTC m=+21.315009855" watchObservedRunningTime="2025-04-30 03:26:31.920879255 +0000 UTC m=+21.321199106"
Apr 30 03:26:31.937273 kubelet[2493]: E0430 03:26:31.936989 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.937273 kubelet[2493]: W0430 03:26:31.937021 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.937273 kubelet[2493]: E0430 03:26:31.937048 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.939316 kubelet[2493]: E0430 03:26:31.939161 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.939316 kubelet[2493]: W0430 03:26:31.939189 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.939316 kubelet[2493]: E0430 03:26:31.939221 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.940701 kubelet[2493]: E0430 03:26:31.940533 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.940701 kubelet[2493]: W0430 03:26:31.940558 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.941046 kubelet[2493]: E0430 03:26:31.940990 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.941329 kubelet[2493]: E0430 03:26:31.941294 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.941329 kubelet[2493]: W0430 03:26:31.941310 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.941812 kubelet[2493]: E0430 03:26:31.941571 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.942166 kubelet[2493]: E0430 03:26:31.942066 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.942410 kubelet[2493]: W0430 03:26:31.942243 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.942631 kubelet[2493]: E0430 03:26:31.942560 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.943655 kubelet[2493]: E0430 03:26:31.943511 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.943655 kubelet[2493]: W0430 03:26:31.943532 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.943757 kubelet[2493]: E0430 03:26:31.943683 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.944912 kubelet[2493]: E0430 03:26:31.944699 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.944912 kubelet[2493]: W0430 03:26:31.944721 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.945492 kubelet[2493]: E0430 03:26:31.945310 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.945826 kubelet[2493]: E0430 03:26:31.945740 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.945826 kubelet[2493]: W0430 03:26:31.945754 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.946300 kubelet[2493]: E0430 03:26:31.946182 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.947480 kubelet[2493]: E0430 03:26:31.946899 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.947480 kubelet[2493]: W0430 03:26:31.946918 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.947480 kubelet[2493]: E0430 03:26:31.946970 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.948413 kubelet[2493]: E0430 03:26:31.948267 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.948413 kubelet[2493]: W0430 03:26:31.948340 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.948413 kubelet[2493]: E0430 03:26:31.948397 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.949690 kubelet[2493]: E0430 03:26:31.949531 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.949690 kubelet[2493]: W0430 03:26:31.949549 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.949690 kubelet[2493]: E0430 03:26:31.949585 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.950729 kubelet[2493]: E0430 03:26:31.950417 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.950729 kubelet[2493]: W0430 03:26:31.950432 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.950729 kubelet[2493]: E0430 03:26:31.950583 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.951597 kubelet[2493]: E0430 03:26:31.951573 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.951597 kubelet[2493]: W0430 03:26:31.951591 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.952178 kubelet[2493]: E0430 03:26:31.952147 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.952745 kubelet[2493]: E0430 03:26:31.952724 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.952745 kubelet[2493]: W0430 03:26:31.952745 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.953405 kubelet[2493]: E0430 03:26:31.952884 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.953405 kubelet[2493]: E0430 03:26:31.953162 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.953405 kubelet[2493]: W0430 03:26:31.953174 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.954373 kubelet[2493]: E0430 03:26:31.953955 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.954562 kubelet[2493]: E0430 03:26:31.954468 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.954562 kubelet[2493]: W0430 03:26:31.954485 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.954562 kubelet[2493]: E0430 03:26:31.954514 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.954825 kubelet[2493]: E0430 03:26:31.954809 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.954825 kubelet[2493]: W0430 03:26:31.954824 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.955014 kubelet[2493]: E0430 03:26:31.954959 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:31.955679 kubelet[2493]: E0430 03:26:31.955661 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:31.955679 kubelet[2493]: W0430 03:26:31.955676 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:31.956070 kubelet[2493]: E0430 03:26:31.955689 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:32.883949 kubelet[2493]: E0430 03:26:32.883896 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:32.914409 kubelet[2493]: E0430 03:26:32.914358 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:32.914409 kubelet[2493]: W0430 03:26:32.914389 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:32.914409 kubelet[2493]: E0430 03:26:32.914423 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:32.914731 kubelet[2493]: E0430 03:26:32.914702 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:32.914731 kubelet[2493]: W0430 03:26:32.914712 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:32.914731 kubelet[2493]: E0430 03:26:32.914728 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:32.915279 kubelet[2493]: E0430 03:26:32.915067 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:32.915279 kubelet[2493]: W0430 03:26:32.915082 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:32.915279 kubelet[2493]: E0430 03:26:32.915100 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:26:32.915434 kubelet[2493]: E0430 03:26:32.915309 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:26:32.915434 kubelet[2493]: W0430 03:26:32.915317 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:26:32.915434 kubelet[2493]: E0430 03:26:32.915325 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.915684 kubelet[2493]: E0430 03:26:32.915634 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.915684 kubelet[2493]: W0430 03:26:32.915649 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.915684 kubelet[2493]: E0430 03:26:32.915660 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.915986 kubelet[2493]: E0430 03:26:32.915961 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.915986 kubelet[2493]: W0430 03:26:32.915979 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.915986 kubelet[2493]: E0430 03:26:32.915993 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.916247 kubelet[2493]: E0430 03:26:32.916230 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.916247 kubelet[2493]: W0430 03:26:32.916242 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.916350 kubelet[2493]: E0430 03:26:32.916262 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.916494 kubelet[2493]: E0430 03:26:32.916480 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.916494 kubelet[2493]: W0430 03:26:32.916495 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.916603 kubelet[2493]: E0430 03:26:32.916504 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.916713 kubelet[2493]: E0430 03:26:32.916700 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.916713 kubelet[2493]: W0430 03:26:32.916712 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.916816 kubelet[2493]: E0430 03:26:32.916724 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.916991 kubelet[2493]: E0430 03:26:32.916971 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.916991 kubelet[2493]: W0430 03:26:32.916986 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.917067 kubelet[2493]: E0430 03:26:32.916999 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.917286 kubelet[2493]: E0430 03:26:32.917271 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.917326 kubelet[2493]: W0430 03:26:32.917286 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.917326 kubelet[2493]: E0430 03:26:32.917300 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.917582 kubelet[2493]: E0430 03:26:32.917566 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.917619 kubelet[2493]: W0430 03:26:32.917583 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.917619 kubelet[2493]: E0430 03:26:32.917596 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.917837 kubelet[2493]: E0430 03:26:32.917822 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.917872 kubelet[2493]: W0430 03:26:32.917844 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.917872 kubelet[2493]: E0430 03:26:32.917857 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.918104 kubelet[2493]: E0430 03:26:32.918087 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.918161 kubelet[2493]: W0430 03:26:32.918102 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.918161 kubelet[2493]: E0430 03:26:32.918123 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.918362 kubelet[2493]: E0430 03:26:32.918347 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.918401 kubelet[2493]: W0430 03:26:32.918362 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.918401 kubelet[2493]: E0430 03:26:32.918375 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.948276 kubelet[2493]: E0430 03:26:32.948229 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.948276 kubelet[2493]: W0430 03:26:32.948260 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.948276 kubelet[2493]: E0430 03:26:32.948288 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.950743 kubelet[2493]: E0430 03:26:32.948696 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.950743 kubelet[2493]: W0430 03:26:32.948713 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.950743 kubelet[2493]: E0430 03:26:32.948732 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.950743 kubelet[2493]: E0430 03:26:32.948998 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.950743 kubelet[2493]: W0430 03:26:32.949010 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.950743 kubelet[2493]: E0430 03:26:32.949024 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.950743 kubelet[2493]: E0430 03:26:32.949298 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.950743 kubelet[2493]: W0430 03:26:32.949311 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.950743 kubelet[2493]: E0430 03:26:32.949324 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.950743 kubelet[2493]: E0430 03:26:32.949566 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.953566 kubelet[2493]: W0430 03:26:32.949580 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.953566 kubelet[2493]: E0430 03:26:32.949594 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.953566 kubelet[2493]: E0430 03:26:32.949811 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.953566 kubelet[2493]: W0430 03:26:32.949822 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.953566 kubelet[2493]: E0430 03:26:32.949835 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.953566 kubelet[2493]: E0430 03:26:32.950094 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.953566 kubelet[2493]: W0430 03:26:32.950106 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.953566 kubelet[2493]: E0430 03:26:32.950118 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.953566 kubelet[2493]: E0430 03:26:32.950665 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.953566 kubelet[2493]: W0430 03:26:32.950679 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.954135 kubelet[2493]: E0430 03:26:32.950711 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.954135 kubelet[2493]: E0430 03:26:32.951083 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.954135 kubelet[2493]: W0430 03:26:32.951105 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.954135 kubelet[2493]: E0430 03:26:32.951128 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.954135 kubelet[2493]: E0430 03:26:32.951385 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.954135 kubelet[2493]: W0430 03:26:32.951399 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.954135 kubelet[2493]: E0430 03:26:32.951421 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.954135 kubelet[2493]: E0430 03:26:32.951698 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.954135 kubelet[2493]: W0430 03:26:32.951712 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.954135 kubelet[2493]: E0430 03:26:32.951732 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.954609 kubelet[2493]: E0430 03:26:32.951991 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.954609 kubelet[2493]: W0430 03:26:32.952002 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.954609 kubelet[2493]: E0430 03:26:32.952026 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.954609 kubelet[2493]: E0430 03:26:32.952944 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.954609 kubelet[2493]: W0430 03:26:32.952957 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.954609 kubelet[2493]: E0430 03:26:32.952971 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.954609 kubelet[2493]: E0430 03:26:32.953184 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.954609 kubelet[2493]: W0430 03:26:32.953193 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.954609 kubelet[2493]: E0430 03:26:32.953205 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.954609 kubelet[2493]: E0430 03:26:32.953408 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.955047 kubelet[2493]: W0430 03:26:32.953418 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.955047 kubelet[2493]: E0430 03:26:32.953430 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.955047 kubelet[2493]: E0430 03:26:32.953662 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.955047 kubelet[2493]: W0430 03:26:32.953674 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.955047 kubelet[2493]: E0430 03:26:32.953687 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:32.955047 kubelet[2493]: E0430 03:26:32.953954 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.955047 kubelet[2493]: W0430 03:26:32.953966 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.955047 kubelet[2493]: E0430 03:26:32.953980 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:26:32.955047 kubelet[2493]: E0430 03:26:32.954394 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:26:32.955047 kubelet[2493]: W0430 03:26:32.954407 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:26:32.955443 kubelet[2493]: E0430 03:26:32.954421 2493 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:26:33.640604 containerd[1469]: time="2025-04-30T03:26:33.640539869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:33.642025 containerd[1469]: time="2025-04-30T03:26:33.641950874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:26:33.643000 containerd[1469]: time="2025-04-30T03:26:33.642954293Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:33.645336 containerd[1469]: time="2025-04-30T03:26:33.645295570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:33.646852 containerd[1469]: time="2025-04-30T03:26:33.646771475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.323748326s" Apr 30 03:26:33.646852 containerd[1469]: time="2025-04-30T03:26:33.646823730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:26:33.651855 containerd[1469]: time="2025-04-30T03:26:33.651627345Z" level=info msg="CreateContainer within sandbox \"9ce0eea2fe8538f61e9c565a3fa808deb7fc7e39ce30801bf36df8fdc067f189\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:26:33.665020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295683662.mount: Deactivated successfully. Apr 30 03:26:33.668561 containerd[1469]: time="2025-04-30T03:26:33.667516755Z" level=info msg="CreateContainer within sandbox \"9ce0eea2fe8538f61e9c565a3fa808deb7fc7e39ce30801bf36df8fdc067f189\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"58208f6cab24a2486bcc7e9a67c248b2e149de73605c23327acc2bcb8baebbbe\"" Apr 30 03:26:33.669224 containerd[1469]: time="2025-04-30T03:26:33.669172799Z" level=info msg="StartContainer for \"58208f6cab24a2486bcc7e9a67c248b2e149de73605c23327acc2bcb8baebbbe\"" Apr 30 03:26:33.723798 systemd[1]: Started cri-containerd-58208f6cab24a2486bcc7e9a67c248b2e149de73605c23327acc2bcb8baebbbe.scope - libcontainer container 58208f6cab24a2486bcc7e9a67c248b2e149de73605c23327acc2bcb8baebbbe. 
Apr 30 03:26:33.742310 kubelet[2493]: E0430 03:26:33.741116 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kld5k" podUID="b718a743-a03f-4839-ab75-67d0237668cd" Apr 30 03:26:33.766599 containerd[1469]: time="2025-04-30T03:26:33.766542301Z" level=info msg="StartContainer for \"58208f6cab24a2486bcc7e9a67c248b2e149de73605c23327acc2bcb8baebbbe\" returns successfully" Apr 30 03:26:33.785045 systemd[1]: cri-containerd-58208f6cab24a2486bcc7e9a67c248b2e149de73605c23327acc2bcb8baebbbe.scope: Deactivated successfully. Apr 30 03:26:33.819771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58208f6cab24a2486bcc7e9a67c248b2e149de73605c23327acc2bcb8baebbbe-rootfs.mount: Deactivated successfully. Apr 30 03:26:33.821108 containerd[1469]: time="2025-04-30T03:26:33.820725878Z" level=info msg="shim disconnected" id=58208f6cab24a2486bcc7e9a67c248b2e149de73605c23327acc2bcb8baebbbe namespace=k8s.io Apr 30 03:26:33.821108 containerd[1469]: time="2025-04-30T03:26:33.820834753Z" level=warning msg="cleaning up after shim disconnected" id=58208f6cab24a2486bcc7e9a67c248b2e149de73605c23327acc2bcb8baebbbe namespace=k8s.io Apr 30 03:26:33.821108 containerd[1469]: time="2025-04-30T03:26:33.820848391Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:26:33.888389 kubelet[2493]: E0430 03:26:33.888335 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:33.888839 kubelet[2493]: E0430 03:26:33.888510 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 
03:26:33.890435 containerd[1469]: time="2025-04-30T03:26:33.890373533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:26:35.742619 kubelet[2493]: E0430 03:26:35.741145 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kld5k" podUID="b718a743-a03f-4839-ab75-67d0237668cd" Apr 30 03:26:37.742703 kubelet[2493]: E0430 03:26:37.742604 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kld5k" podUID="b718a743-a03f-4839-ab75-67d0237668cd" Apr 30 03:26:38.669642 containerd[1469]: time="2025-04-30T03:26:38.669569693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:38.671754 containerd[1469]: time="2025-04-30T03:26:38.671674930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:26:38.673525 containerd[1469]: time="2025-04-30T03:26:38.672546306Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:38.676484 containerd[1469]: time="2025-04-30T03:26:38.676280948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:38.677736 containerd[1469]: time="2025-04-30T03:26:38.677558577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with 
image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.787138406s" Apr 30 03:26:38.677736 containerd[1469]: time="2025-04-30T03:26:38.677609025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:26:38.681317 containerd[1469]: time="2025-04-30T03:26:38.681132122Z" level=info msg="CreateContainer within sandbox \"9ce0eea2fe8538f61e9c565a3fa808deb7fc7e39ce30801bf36df8fdc067f189\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:26:38.699095 containerd[1469]: time="2025-04-30T03:26:38.699027244Z" level=info msg="CreateContainer within sandbox \"9ce0eea2fe8538f61e9c565a3fa808deb7fc7e39ce30801bf36df8fdc067f189\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5294c8dc5ca972532f2291cbf0938b1681da601e975ecb60a8a619030baf42a\"" Apr 30 03:26:38.701786 containerd[1469]: time="2025-04-30T03:26:38.701733058Z" level=info msg="StartContainer for \"e5294c8dc5ca972532f2291cbf0938b1681da601e975ecb60a8a619030baf42a\"" Apr 30 03:26:38.818896 systemd[1]: Started cri-containerd-e5294c8dc5ca972532f2291cbf0938b1681da601e975ecb60a8a619030baf42a.scope - libcontainer container e5294c8dc5ca972532f2291cbf0938b1681da601e975ecb60a8a619030baf42a. 
Apr 30 03:26:38.860105 containerd[1469]: time="2025-04-30T03:26:38.858557686Z" level=info msg="StartContainer for \"e5294c8dc5ca972532f2291cbf0938b1681da601e975ecb60a8a619030baf42a\" returns successfully" Apr 30 03:26:38.902902 kubelet[2493]: E0430 03:26:38.902837 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:39.449984 systemd[1]: cri-containerd-e5294c8dc5ca972532f2291cbf0938b1681da601e975ecb60a8a619030baf42a.scope: Deactivated successfully. Apr 30 03:26:39.493806 containerd[1469]: time="2025-04-30T03:26:39.492536456Z" level=info msg="shim disconnected" id=e5294c8dc5ca972532f2291cbf0938b1681da601e975ecb60a8a619030baf42a namespace=k8s.io Apr 30 03:26:39.493806 containerd[1469]: time="2025-04-30T03:26:39.492599405Z" level=warning msg="cleaning up after shim disconnected" id=e5294c8dc5ca972532f2291cbf0938b1681da601e975ecb60a8a619030baf42a namespace=k8s.io Apr 30 03:26:39.493806 containerd[1469]: time="2025-04-30T03:26:39.492610744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:26:39.494087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5294c8dc5ca972532f2291cbf0938b1681da601e975ecb60a8a619030baf42a-rootfs.mount: Deactivated successfully. 
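Annotation: the recurring `dns.go:153` "Nameserver limits exceeded" warnings reflect kubelet's cap of three nameservers per resolv.conf (the upstream limit, historically tied to glibc's MAXNS). A rough sketch of that truncation, assuming the node's resolv.conf listed more than three entries; note the applied line in this log keeps a duplicate `67.207.67.2`, so no de-duplication is modeled:

```python
# Sketch of kubelet's nameserver cap: entries beyond the limit are
# dropped and a "Nameserver limits exceeded" event is logged.
# Duplicates are not collapsed, matching the applied line in the log.
MAX_DNS_NAMESERVERS = 3

def apply_nameserver_limit(servers: list[str]) -> tuple[list[str], bool]:
    exceeded = len(servers) > MAX_DNS_NAMESERVERS
    return servers[:MAX_DNS_NAMESERVERS], exceeded

applied, exceeded = apply_nameserver_limit(
    ["67.207.67.2", "67.207.67.3", "67.207.67.2", "8.8.8.8"]  # hypothetical input
)
print(applied, exceeded)
```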
Apr 30 03:26:39.517322 containerd[1469]: time="2025-04-30T03:26:39.517247231Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:26:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 03:26:39.539622 kubelet[2493]: I0430 03:26:39.539584 2493 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Apr 30 03:26:39.591268 systemd[1]: Created slice kubepods-burstable-pod73a01c76_53ce_4689_98ef_5958ffbdd467.slice - libcontainer container kubepods-burstable-pod73a01c76_53ce_4689_98ef_5958ffbdd467.slice.
Apr 30 03:26:39.601783 kubelet[2493]: W0430 03:26:39.601179 2493 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.3-2-e7e0406ed5" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object
Apr 30 03:26:39.601783 kubelet[2493]: E0430 03:26:39.601457 2493 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.3-2-e7e0406ed5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object" logger="UnhandledError"
Apr 30 03:26:39.604714 kubelet[2493]: W0430 03:26:39.603195 2493 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-2-e7e0406ed5" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object
Apr 30 03:26:39.604714 kubelet[2493]: E0430 03:26:39.603245 2493 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.3-2-e7e0406ed5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.3-2-e7e0406ed5' and this object" logger="UnhandledError"
Apr 30 03:26:39.620325 systemd[1]: Created slice kubepods-besteffort-pod49d8fe7d_2d8b_449c_8513_592e9baca4b5.slice - libcontainer container kubepods-besteffort-pod49d8fe7d_2d8b_449c_8513_592e9baca4b5.slice.
Apr 30 03:26:39.638473 systemd[1]: Created slice kubepods-burstable-pod107c0e05_c671_4f02_9024_81afd867d87a.slice - libcontainer container kubepods-burstable-pod107c0e05_c671_4f02_9024_81afd867d87a.slice.
Apr 30 03:26:39.653273 systemd[1]: Created slice kubepods-besteffort-pod7b15cb4e_c1af_493f_b9f9_4cb6e0146639.slice - libcontainer container kubepods-besteffort-pod7b15cb4e_c1af_493f_b9f9_4cb6e0146639.slice.
Apr 30 03:26:39.664153 systemd[1]: Created slice kubepods-besteffort-podfc52b32e_625c_4cbe_89c1_725d343029fc.slice - libcontainer container kubepods-besteffort-podfc52b32e_625c_4cbe_89c1_725d343029fc.slice.
Apr 30 03:26:39.703307 kubelet[2493]: I0430 03:26:39.702954 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qccf2\" (UniqueName: \"kubernetes.io/projected/107c0e05-c671-4f02-9024-81afd867d87a-kube-api-access-qccf2\") pod \"coredns-668d6bf9bc-fj257\" (UID: \"107c0e05-c671-4f02-9024-81afd867d87a\") " pod="kube-system/coredns-668d6bf9bc-fj257"
Apr 30 03:26:39.704503 kubelet[2493]: I0430 03:26:39.703973 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw96c\" (UniqueName: \"kubernetes.io/projected/73a01c76-53ce-4689-98ef-5958ffbdd467-kube-api-access-tw96c\") pod \"coredns-668d6bf9bc-c4b7t\" (UID: \"73a01c76-53ce-4689-98ef-5958ffbdd467\") " pod="kube-system/coredns-668d6bf9bc-c4b7t"
Apr 30 03:26:39.704503 kubelet[2493]: I0430 03:26:39.704033 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzvmd\" (UniqueName: \"kubernetes.io/projected/49d8fe7d-2d8b-449c-8513-592e9baca4b5-kube-api-access-gzvmd\") pod \"calico-apiserver-ccf698587-gjwrf\" (UID: \"49d8fe7d-2d8b-449c-8513-592e9baca4b5\") " pod="calico-apiserver/calico-apiserver-ccf698587-gjwrf"
Apr 30 03:26:39.704503 kubelet[2493]: I0430 03:26:39.704060 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b15cb4e-c1af-493f-b9f9-4cb6e0146639-calico-apiserver-certs\") pod \"calico-apiserver-ccf698587-5rqrm\" (UID: \"7b15cb4e-c1af-493f-b9f9-4cb6e0146639\") " pod="calico-apiserver/calico-apiserver-ccf698587-5rqrm"
Apr 30 03:26:39.704503 kubelet[2493]: I0430 03:26:39.704098 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rcxw\" (UniqueName: \"kubernetes.io/projected/fc52b32e-625c-4cbe-89c1-725d343029fc-kube-api-access-9rcxw\") pod \"calico-kube-controllers-566d49dfb-hxnld\" (UID: \"fc52b32e-625c-4cbe-89c1-725d343029fc\") " pod="calico-system/calico-kube-controllers-566d49dfb-hxnld"
Apr 30 03:26:39.704503 kubelet[2493]: I0430 03:26:39.704129 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49d8fe7d-2d8b-449c-8513-592e9baca4b5-calico-apiserver-certs\") pod \"calico-apiserver-ccf698587-gjwrf\" (UID: \"49d8fe7d-2d8b-449c-8513-592e9baca4b5\") " pod="calico-apiserver/calico-apiserver-ccf698587-gjwrf"
Apr 30 03:26:39.704864 kubelet[2493]: I0430 03:26:39.704155 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/107c0e05-c671-4f02-9024-81afd867d87a-config-volume\") pod \"coredns-668d6bf9bc-fj257\" (UID: \"107c0e05-c671-4f02-9024-81afd867d87a\") " pod="kube-system/coredns-668d6bf9bc-fj257"
Apr 30 03:26:39.704864 kubelet[2493]: I0430 03:26:39.704195 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a01c76-53ce-4689-98ef-5958ffbdd467-config-volume\") pod \"coredns-668d6bf9bc-c4b7t\" (UID: \"73a01c76-53ce-4689-98ef-5958ffbdd467\") " pod="kube-system/coredns-668d6bf9bc-c4b7t"
Apr 30 03:26:39.704864 kubelet[2493]: I0430 03:26:39.704227 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc52b32e-625c-4cbe-89c1-725d343029fc-tigera-ca-bundle\") pod \"calico-kube-controllers-566d49dfb-hxnld\" (UID: \"fc52b32e-625c-4cbe-89c1-725d343029fc\") " pod="calico-system/calico-kube-controllers-566d49dfb-hxnld"
Apr 30 03:26:39.704864 kubelet[2493]: I0430 03:26:39.704262 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hd5\" (UniqueName: \"kubernetes.io/projected/7b15cb4e-c1af-493f-b9f9-4cb6e0146639-kube-api-access-h9hd5\") pod \"calico-apiserver-ccf698587-5rqrm\" (UID: \"7b15cb4e-c1af-493f-b9f9-4cb6e0146639\") " pod="calico-apiserver/calico-apiserver-ccf698587-5rqrm"
Apr 30 03:26:39.751118 systemd[1]: Created slice kubepods-besteffort-podb718a743_a03f_4839_ab75_67d0237668cd.slice - libcontainer container kubepods-besteffort-podb718a743_a03f_4839_ab75_67d0237668cd.slice.
Apr 30 03:26:39.754862 containerd[1469]: time="2025-04-30T03:26:39.754807275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kld5k,Uid:b718a743-a03f-4839-ab75-67d0237668cd,Namespace:calico-system,Attempt:0,}"
Apr 30 03:26:39.905608 kubelet[2493]: E0430 03:26:39.903565 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:39.906114 containerd[1469]: time="2025-04-30T03:26:39.905202475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4b7t,Uid:73a01c76-53ce-4689-98ef-5958ffbdd467,Namespace:kube-system,Attempt:0,}"
Apr 30 03:26:39.943239 kubelet[2493]: E0430 03:26:39.942074 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:39.946343 kubelet[2493]: E0430 03:26:39.946300 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:39.947805 containerd[1469]: time="2025-04-30T03:26:39.947714551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
Apr 30 03:26:39.948347 containerd[1469]: time="2025-04-30T03:26:39.948301919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fj257,Uid:107c0e05-c671-4f02-9024-81afd867d87a,Namespace:kube-system,Attempt:0,}"
Apr 30 03:26:39.990479 containerd[1469]: time="2025-04-30T03:26:39.990267555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566d49dfb-hxnld,Uid:fc52b32e-625c-4cbe-89c1-725d343029fc,Namespace:calico-system,Attempt:0,}"
Apr 30 03:26:40.192682 containerd[1469]: time="2025-04-30T03:26:40.192624356Z" level=error msg="Failed to destroy network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.198320 containerd[1469]: time="2025-04-30T03:26:40.198237367Z" level=error msg="encountered an error cleaning up failed sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.198619 containerd[1469]: time="2025-04-30T03:26:40.198586942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4b7t,Uid:73a01c76-53ce-4689-98ef-5958ffbdd467,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.205372 containerd[1469]: time="2025-04-30T03:26:40.204223763Z" level=error msg="Failed to destroy network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.205564 containerd[1469]: time="2025-04-30T03:26:40.205412003Z" level=error msg="encountered an error cleaning up failed sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.205861 containerd[1469]: time="2025-04-30T03:26:40.205752744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kld5k,Uid:b718a743-a03f-4839-ab75-67d0237668cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.207144 containerd[1469]: time="2025-04-30T03:26:40.205941753Z" level=error msg="Failed to destroy network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.207144 containerd[1469]: time="2025-04-30T03:26:40.206607239Z" level=error msg="encountered an error cleaning up failed sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.207144 containerd[1469]: time="2025-04-30T03:26:40.206666878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fj257,Uid:107c0e05-c671-4f02-9024-81afd867d87a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.230363 kubelet[2493]: E0430 03:26:40.230225 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.230363 kubelet[2493]: E0430 03:26:40.230286 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.230363 kubelet[2493]: E0430 03:26:40.230321 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c4b7t"
Apr 30 03:26:40.230363 kubelet[2493]: E0430 03:26:40.230357 2493 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c4b7t"
Apr 30 03:26:40.231057 kubelet[2493]: E0430 03:26:40.230409 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c4b7t_kube-system(73a01c76-53ce-4689-98ef-5958ffbdd467)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c4b7t_kube-system(73a01c76-53ce-4689-98ef-5958ffbdd467)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-c4b7t" podUID="73a01c76-53ce-4689-98ef-5958ffbdd467"
Apr 30 03:26:40.231057 kubelet[2493]: E0430 03:26:40.230225 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.231057 kubelet[2493]: E0430 03:26:40.230736 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fj257"
Apr 30 03:26:40.231315 kubelet[2493]: E0430 03:26:40.230765 2493 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fj257"
Apr 30 03:26:40.231315 kubelet[2493]: E0430 03:26:40.230862 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fj257_kube-system(107c0e05-c671-4f02-9024-81afd867d87a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fj257_kube-system(107c0e05-c671-4f02-9024-81afd867d87a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fj257" podUID="107c0e05-c671-4f02-9024-81afd867d87a"
Apr 30 03:26:40.231315 kubelet[2493]: E0430 03:26:40.230873 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kld5k"
Apr 30 03:26:40.232350 kubelet[2493]: E0430 03:26:40.230924 2493 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kld5k"
Apr 30 03:26:40.232350 kubelet[2493]: E0430 03:26:40.231549 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kld5k_calico-system(b718a743-a03f-4839-ab75-67d0237668cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kld5k_calico-system(b718a743-a03f-4839-ab75-67d0237668cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kld5k" podUID="b718a743-a03f-4839-ab75-67d0237668cd"
Apr 30 03:26:40.277693 containerd[1469]: time="2025-04-30T03:26:40.277471343Z" level=error msg="Failed to destroy network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.278510 containerd[1469]: time="2025-04-30T03:26:40.278191422Z" level=error msg="encountered an error cleaning up failed sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.278510 containerd[1469]: time="2025-04-30T03:26:40.278255633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566d49dfb-hxnld,Uid:fc52b32e-625c-4cbe-89c1-725d343029fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.278996 kubelet[2493]: E0430 03:26:40.278941 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.279074 kubelet[2493]: E0430 03:26:40.279018 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-566d49dfb-hxnld"
Apr 30 03:26:40.279074 kubelet[2493]: E0430 03:26:40.279044 2493 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-566d49dfb-hxnld"
Apr 30 03:26:40.279166 kubelet[2493]: E0430 03:26:40.279106 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-566d49dfb-hxnld_calico-system(fc52b32e-625c-4cbe-89c1-725d343029fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-566d49dfb-hxnld_calico-system(fc52b32e-625c-4cbe-89c1-725d343029fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-566d49dfb-hxnld" podUID="fc52b32e-625c-4cbe-89c1-725d343029fc"
Apr 30 03:26:40.772936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c-shm.mount: Deactivated successfully.
Apr 30 03:26:40.833415 containerd[1469]: time="2025-04-30T03:26:40.833347465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccf698587-gjwrf,Uid:49d8fe7d-2d8b-449c-8513-592e9baca4b5,Namespace:calico-apiserver,Attempt:0,}"
Apr 30 03:26:40.863185 containerd[1469]: time="2025-04-30T03:26:40.863139341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccf698587-5rqrm,Uid:7b15cb4e-c1af-493f-b9f9-4cb6e0146639,Namespace:calico-apiserver,Attempt:0,}"
Apr 30 03:26:40.946514 kubelet[2493]: I0430 03:26:40.946121 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4"
Apr 30 03:26:40.952079 kubelet[2493]: I0430 03:26:40.951323 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067"
Apr 30 03:26:40.955989 containerd[1469]: time="2025-04-30T03:26:40.955619189Z" level=info msg="StopPodSandbox for \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\""
Apr 30 03:26:40.956289 containerd[1469]: time="2025-04-30T03:26:40.956181022Z" level=info msg="StopPodSandbox for \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\""
Apr 30 03:26:40.958511 containerd[1469]: time="2025-04-30T03:26:40.958230354Z" level=info msg="Ensure that sandbox 063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067 in task-service has been cleanup successfully"
Apr 30 03:26:40.958724 containerd[1469]: time="2025-04-30T03:26:40.958691239Z" level=info msg="Ensure that sandbox 023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4 in task-service has been cleanup successfully"
Apr 30 03:26:40.960413 kubelet[2493]: I0430 03:26:40.960371 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0"
Apr 30 03:26:40.961405 containerd[1469]: time="2025-04-30T03:26:40.961206675Z" level=info msg="StopPodSandbox for \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\""
Apr 30 03:26:40.962167 containerd[1469]: time="2025-04-30T03:26:40.962097041Z" level=info msg="Ensure that sandbox ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0 in task-service has been cleanup successfully"
Apr 30 03:26:40.974831 kubelet[2493]: I0430 03:26:40.974577 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c"
Apr 30 03:26:40.977690 containerd[1469]: time="2025-04-30T03:26:40.977581707Z" level=info msg="StopPodSandbox for \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\""
Apr 30 03:26:40.979474 containerd[1469]: time="2025-04-30T03:26:40.979220021Z" level=info msg="Ensure that sandbox c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c in task-service has been cleanup successfully"
Apr 30 03:26:40.996966 containerd[1469]: time="2025-04-30T03:26:40.996562810Z" level=error msg="Failed to destroy network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.997098 containerd[1469]: time="2025-04-30T03:26:40.997023250Z" level=error msg="encountered an error cleaning up failed sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.997152 containerd[1469]: time="2025-04-30T03:26:40.997094969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccf698587-gjwrf,Uid:49d8fe7d-2d8b-449c-8513-592e9baca4b5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.997507 kubelet[2493]: E0430 03:26:40.997457 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:40.997606 kubelet[2493]: E0430 03:26:40.997537 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccf698587-gjwrf"
Apr 30 03:26:40.997606 kubelet[2493]: E0430 03:26:40.997561 2493 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccf698587-gjwrf"
Apr 30 03:26:40.997659 kubelet[2493]: E0430 03:26:40.997612 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccf698587-gjwrf_calico-apiserver(49d8fe7d-2d8b-449c-8513-592e9baca4b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccf698587-gjwrf_calico-apiserver(49d8fe7d-2d8b-449c-8513-592e9baca4b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccf698587-gjwrf" podUID="49d8fe7d-2d8b-449c-8513-592e9baca4b5"
Apr 30 03:26:41.058681 containerd[1469]: time="2025-04-30T03:26:41.058622529Z" level=error msg="Failed to destroy network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:41.059157 containerd[1469]: time="2025-04-30T03:26:41.058993687Z" level=error msg="encountered an error cleaning up failed sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:41.059157 containerd[1469]: time="2025-04-30T03:26:41.059057670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccf698587-5rqrm,Uid:7b15cb4e-c1af-493f-b9f9-4cb6e0146639,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:41.060389 kubelet[2493]: E0430 03:26:41.059581 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:41.060389 kubelet[2493]: E0430 03:26:41.060013 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccf698587-5rqrm"
Apr 30 03:26:41.060389 kubelet[2493]: E0430 03:26:41.060041 2493 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccf698587-5rqrm"
Apr 30 03:26:41.060576 kubelet[2493]: E0430 03:26:41.060097 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccf698587-5rqrm_calico-apiserver(7b15cb4e-c1af-493f-b9f9-4cb6e0146639)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccf698587-5rqrm_calico-apiserver(7b15cb4e-c1af-493f-b9f9-4cb6e0146639)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccf698587-5rqrm" podUID="7b15cb4e-c1af-493f-b9f9-4cb6e0146639"
Apr 30 03:26:41.084080 containerd[1469]: time="2025-04-30T03:26:41.083154914Z" level=error msg="StopPodSandbox for \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\" failed" error="failed to destroy network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:26:41.084346 kubelet[2493]: E0430 03:26:41.083513 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0"
Apr 30 03:26:41.084346 kubelet[2493]: E0430 03:26:41.083598 2493 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0"}
Apr 30 03:26:41.084346 kubelet[2493]: E0430 03:26:41.083672 2493 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73a01c76-53ce-4689-98ef-5958ffbdd467\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox
\\\"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:26:41.084346 kubelet[2493]: E0430 03:26:41.083705 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73a01c76-53ce-4689-98ef-5958ffbdd467\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-c4b7t" podUID="73a01c76-53ce-4689-98ef-5958ffbdd467" Apr 30 03:26:41.091338 containerd[1469]: time="2025-04-30T03:26:41.090829920Z" level=error msg="StopPodSandbox for \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\" failed" error="failed to destroy network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:26:41.091575 kubelet[2493]: E0430 03:26:41.091106 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Apr 30 03:26:41.091575 kubelet[2493]: E0430 03:26:41.091168 2493 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067"} Apr 30 03:26:41.091575 kubelet[2493]: E0430 03:26:41.091205 2493 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc52b32e-625c-4cbe-89c1-725d343029fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:26:41.091575 kubelet[2493]: E0430 03:26:41.091236 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc52b32e-625c-4cbe-89c1-725d343029fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-566d49dfb-hxnld" podUID="fc52b32e-625c-4cbe-89c1-725d343029fc" Apr 30 03:26:41.101550 containerd[1469]: time="2025-04-30T03:26:41.101479060Z" level=error msg="StopPodSandbox for \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\" failed" error="failed to destroy network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:26:41.102877 kubelet[2493]: E0430 03:26:41.102624 2493 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:26:41.102877 kubelet[2493]: E0430 03:26:41.102686 2493 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4"} Apr 30 03:26:41.102877 kubelet[2493]: E0430 03:26:41.102738 2493 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"107c0e05-c671-4f02-9024-81afd867d87a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:26:41.102877 kubelet[2493]: E0430 03:26:41.102818 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"107c0e05-c671-4f02-9024-81afd867d87a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fj257" podUID="107c0e05-c671-4f02-9024-81afd867d87a" Apr 30 03:26:41.105060 containerd[1469]: time="2025-04-30T03:26:41.104999969Z" level=error msg="StopPodSandbox for 
\"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\" failed" error="failed to destroy network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:26:41.105634 kubelet[2493]: E0430 03:26:41.105428 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:26:41.105634 kubelet[2493]: E0430 03:26:41.105517 2493 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c"} Apr 30 03:26:41.105634 kubelet[2493]: E0430 03:26:41.105572 2493 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b718a743-a03f-4839-ab75-67d0237668cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:26:41.105634 kubelet[2493]: E0430 03:26:41.105594 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b718a743-a03f-4839-ab75-67d0237668cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kld5k" podUID="b718a743-a03f-4839-ab75-67d0237668cd" Apr 30 03:26:41.767743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b-shm.mount: Deactivated successfully. Apr 30 03:26:41.979331 kubelet[2493]: I0430 03:26:41.979295 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:26:41.981141 containerd[1469]: time="2025-04-30T03:26:41.980742672Z" level=info msg="StopPodSandbox for \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\"" Apr 30 03:26:41.981141 containerd[1469]: time="2025-04-30T03:26:41.981058529Z" level=info msg="Ensure that sandbox 21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b in task-service has been cleanup successfully" Apr 30 03:26:41.983724 kubelet[2493]: I0430 03:26:41.982733 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:26:41.984928 containerd[1469]: time="2025-04-30T03:26:41.984879413Z" level=info msg="StopPodSandbox for \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\"" Apr 30 03:26:41.985513 containerd[1469]: time="2025-04-30T03:26:41.985132974Z" level=info msg="Ensure that sandbox 16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c in task-service has been cleanup successfully" Apr 30 03:26:42.039818 containerd[1469]: time="2025-04-30T03:26:42.039661843Z" level=error msg="StopPodSandbox for \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\" failed" error="failed to 
destroy network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:26:42.040340 kubelet[2493]: E0430 03:26:42.040272 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:26:42.040518 kubelet[2493]: E0430 03:26:42.040357 2493 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b"} Apr 30 03:26:42.040518 kubelet[2493]: E0430 03:26:42.040421 2493 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49d8fe7d-2d8b-449c-8513-592e9baca4b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:26:42.040518 kubelet[2493]: E0430 03:26:42.040473 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49d8fe7d-2d8b-449c-8513-592e9baca4b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccf698587-gjwrf" podUID="49d8fe7d-2d8b-449c-8513-592e9baca4b5" Apr 30 03:26:42.045086 containerd[1469]: time="2025-04-30T03:26:42.045008010Z" level=error msg="StopPodSandbox for \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\" failed" error="failed to destroy network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:26:42.045916 kubelet[2493]: E0430 03:26:42.045667 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:26:42.045916 kubelet[2493]: E0430 03:26:42.045736 2493 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c"} Apr 30 03:26:42.045916 kubelet[2493]: E0430 03:26:42.045784 2493 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b15cb4e-c1af-493f-b9f9-4cb6e0146639\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Apr 30 03:26:42.045916 kubelet[2493]: E0430 03:26:42.045840 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b15cb4e-c1af-493f-b9f9-4cb6e0146639\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccf698587-5rqrm" podUID="7b15cb4e-c1af-493f-b9f9-4cb6e0146639" Apr 30 03:26:46.781512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085621873.mount: Deactivated successfully. Apr 30 03:26:46.871196 containerd[1469]: time="2025-04-30T03:26:46.870929593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:26:46.882619 containerd[1469]: time="2025-04-30T03:26:46.882395995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:46.892989 containerd[1469]: time="2025-04-30T03:26:46.892932002Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:46.894318 containerd[1469]: time="2025-04-30T03:26:46.893585618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:46.894800 containerd[1469]: time="2025-04-30T03:26:46.894670305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id 
\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.946863253s" Apr 30 03:26:46.894969 containerd[1469]: time="2025-04-30T03:26:46.894946739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:26:46.946503 containerd[1469]: time="2025-04-30T03:26:46.946431336Z" level=info msg="CreateContainer within sandbox \"9ce0eea2fe8538f61e9c565a3fa808deb7fc7e39ce30801bf36df8fdc067f189\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:26:47.048930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135061387.mount: Deactivated successfully. Apr 30 03:26:47.078369 containerd[1469]: time="2025-04-30T03:26:47.078292641Z" level=info msg="CreateContainer within sandbox \"9ce0eea2fe8538f61e9c565a3fa808deb7fc7e39ce30801bf36df8fdc067f189\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a83c633f1e018664462a06216d91d8228db8fda72eb73366df3a2049211d07f0\"" Apr 30 03:26:47.081379 containerd[1469]: time="2025-04-30T03:26:47.079664629Z" level=info msg="StartContainer for \"a83c633f1e018664462a06216d91d8228db8fda72eb73366df3a2049211d07f0\"" Apr 30 03:26:47.288782 systemd[1]: Started cri-containerd-a83c633f1e018664462a06216d91d8228db8fda72eb73366df3a2049211d07f0.scope - libcontainer container a83c633f1e018664462a06216d91d8228db8fda72eb73366df3a2049211d07f0. Apr 30 03:26:47.336169 containerd[1469]: time="2025-04-30T03:26:47.335632103Z" level=info msg="StartContainer for \"a83c633f1e018664462a06216d91d8228db8fda72eb73366df3a2049211d07f0\" returns successfully" Apr 30 03:26:47.445862 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Apr 30 03:26:47.447267 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 03:26:48.036783 kubelet[2493]: E0430 03:26:48.036618 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:48.075627 kubelet[2493]: I0430 03:26:48.073756 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tt4wb" podStartSLOduration=4.060276873 podStartE2EDuration="23.073577847s" podCreationTimestamp="2025-04-30 03:26:25 +0000 UTC" firstStartedPulling="2025-04-30 03:26:27.897636056 +0000 UTC m=+17.297955829" lastFinishedPulling="2025-04-30 03:26:46.910937041 +0000 UTC m=+36.311256803" observedRunningTime="2025-04-30 03:26:48.069571356 +0000 UTC m=+37.469891135" watchObservedRunningTime="2025-04-30 03:26:48.073577847 +0000 UTC m=+37.473897665" Apr 30 03:26:49.042513 kubelet[2493]: E0430 03:26:49.038836 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:49.448496 kernel: bpftool[3873]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:26:49.773221 systemd-networkd[1372]: vxlan.calico: Link UP Apr 30 03:26:49.773230 systemd-networkd[1372]: vxlan.calico: Gained carrier Apr 30 03:26:50.041747 kubelet[2493]: E0430 03:26:50.041350 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:50.074355 systemd[1]: run-containerd-runc-k8s.io-a83c633f1e018664462a06216d91d8228db8fda72eb73366df3a2049211d07f0-runc.3JJTHs.mount: Deactivated successfully. 
Apr 30 03:26:50.785703 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Apr 30 03:26:51.744629 containerd[1469]: time="2025-04-30T03:26:51.744091982Z" level=info msg="StopPodSandbox for \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\"" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:51.843 [INFO][3982] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:51.843 [INFO][3982] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" iface="eth0" netns="/var/run/netns/cni-74335a93-a1ab-2d75-0dbd-3341bf3ad376" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:51.844 [INFO][3982] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" iface="eth0" netns="/var/run/netns/cni-74335a93-a1ab-2d75-0dbd-3341bf3ad376" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:51.846 [INFO][3982] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" iface="eth0" netns="/var/run/netns/cni-74335a93-a1ab-2d75-0dbd-3341bf3ad376" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:51.846 [INFO][3982] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:51.846 [INFO][3982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:52.008 [INFO][3989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" HandleID="k8s-pod-network.023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:52.012 [INFO][3989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:52.013 [INFO][3989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:52.025 [WARNING][3989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" HandleID="k8s-pod-network.023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:52.026 [INFO][3989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" HandleID="k8s-pod-network.023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:52.028 [INFO][3989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:26:52.033982 containerd[1469]: 2025-04-30 03:26:52.030 [INFO][3982] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:26:52.037268 containerd[1469]: time="2025-04-30T03:26:52.036728859Z" level=info msg="TearDown network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\" successfully" Apr 30 03:26:52.037268 containerd[1469]: time="2025-04-30T03:26:52.036770304Z" level=info msg="StopPodSandbox for \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\" returns successfully" Apr 30 03:26:52.039559 kubelet[2493]: E0430 03:26:52.039341 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:52.039654 systemd[1]: run-netns-cni\x2d74335a93\x2da1ab\x2d2d75\x2d0dbd\x2d3341bf3ad376.mount: Deactivated successfully. 
Apr 30 03:26:52.050518 containerd[1469]: time="2025-04-30T03:26:52.050361712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fj257,Uid:107c0e05-c671-4f02-9024-81afd867d87a,Namespace:kube-system,Attempt:1,}" Apr 30 03:26:52.294249 systemd-networkd[1372]: cali485c638305d: Link UP Apr 30 03:26:52.294660 systemd-networkd[1372]: cali485c638305d: Gained carrier Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.158 [INFO][3999] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0 coredns-668d6bf9bc- kube-system 107c0e05-c671-4f02-9024-81afd867d87a 774 0 2025-04-30 03:26:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-2-e7e0406ed5 coredns-668d6bf9bc-fj257 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali485c638305d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Namespace="kube-system" Pod="coredns-668d6bf9bc-fj257" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.159 [INFO][3999] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Namespace="kube-system" Pod="coredns-668d6bf9bc-fj257" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.224 [INFO][4008] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" 
HandleID="k8s-pod-network.408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.237 [INFO][4008] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" HandleID="k8s-pod-network.408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011bc50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-2-e7e0406ed5", "pod":"coredns-668d6bf9bc-fj257", "timestamp":"2025-04-30 03:26:52.224027628 +0000 UTC"}, Hostname:"ci-4081.3.3-2-e7e0406ed5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.237 [INFO][4008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.237 [INFO][4008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.237 [INFO][4008] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-2-e7e0406ed5' Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.241 [INFO][4008] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.251 [INFO][4008] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.258 [INFO][4008] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.262 [INFO][4008] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.266 [INFO][4008] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.266 [INFO][4008] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.269 [INFO][4008] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885 Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.275 [INFO][4008] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.284 [INFO][4008] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.11.129/26] block=192.168.11.128/26 handle="k8s-pod-network.408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.284 [INFO][4008] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.129/26] handle="k8s-pod-network.408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.284 [INFO][4008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:26:52.321964 containerd[1469]: 2025-04-30 03:26:52.284 [INFO][4008] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.129/26] IPv6=[] ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" HandleID="k8s-pod-network.408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.323796 containerd[1469]: 2025-04-30 03:26:52.289 [INFO][3999] cni-plugin/k8s.go 386: Populated endpoint ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Namespace="kube-system" Pod="coredns-668d6bf9bc-fj257" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"107c0e05-c671-4f02-9024-81afd867d87a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"", Pod:"coredns-668d6bf9bc-fj257", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali485c638305d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:26:52.323796 containerd[1469]: 2025-04-30 03:26:52.290 [INFO][3999] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.129/32] ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Namespace="kube-system" Pod="coredns-668d6bf9bc-fj257" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.323796 containerd[1469]: 2025-04-30 03:26:52.290 [INFO][3999] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali485c638305d ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Namespace="kube-system" Pod="coredns-668d6bf9bc-fj257" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.323796 containerd[1469]: 2025-04-30 03:26:52.293 [INFO][3999] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Namespace="kube-system" Pod="coredns-668d6bf9bc-fj257" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.323796 containerd[1469]: 2025-04-30 03:26:52.295 [INFO][3999] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Namespace="kube-system" Pod="coredns-668d6bf9bc-fj257" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"107c0e05-c671-4f02-9024-81afd867d87a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885", Pod:"coredns-668d6bf9bc-fj257", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali485c638305d", MAC:"ea:e9:ba:13:e5:72", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:26:52.323796 containerd[1469]: 2025-04-30 03:26:52.310 [INFO][3999] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885" Namespace="kube-system" Pod="coredns-668d6bf9bc-fj257" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:26:52.363526 containerd[1469]: time="2025-04-30T03:26:52.363059580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:26:52.363526 containerd[1469]: time="2025-04-30T03:26:52.363158936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:26:52.363526 containerd[1469]: time="2025-04-30T03:26:52.363176692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:52.364831 containerd[1469]: time="2025-04-30T03:26:52.363373785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:52.402743 systemd[1]: Started cri-containerd-408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885.scope - libcontainer container 408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885. 
Apr 30 03:26:52.462256 containerd[1469]: time="2025-04-30T03:26:52.462202349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fj257,Uid:107c0e05-c671-4f02-9024-81afd867d87a,Namespace:kube-system,Attempt:1,} returns sandbox id \"408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885\"" Apr 30 03:26:52.463510 kubelet[2493]: E0430 03:26:52.463423 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:52.466901 containerd[1469]: time="2025-04-30T03:26:52.466850423Z" level=info msg="CreateContainer within sandbox \"408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:26:52.493788 containerd[1469]: time="2025-04-30T03:26:52.493727764Z" level=info msg="CreateContainer within sandbox \"408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a871985553efcced4f93694acf55c5a17eabd88e62097f557cf83c0faeb32fc4\"" Apr 30 03:26:52.495701 containerd[1469]: time="2025-04-30T03:26:52.494828609Z" level=info msg="StartContainer for \"a871985553efcced4f93694acf55c5a17eabd88e62097f557cf83c0faeb32fc4\"" Apr 30 03:26:52.532689 systemd[1]: Started cri-containerd-a871985553efcced4f93694acf55c5a17eabd88e62097f557cf83c0faeb32fc4.scope - libcontainer container a871985553efcced4f93694acf55c5a17eabd88e62097f557cf83c0faeb32fc4. 
Apr 30 03:26:52.572294 containerd[1469]: time="2025-04-30T03:26:52.572164004Z" level=info msg="StartContainer for \"a871985553efcced4f93694acf55c5a17eabd88e62097f557cf83c0faeb32fc4\" returns successfully" Apr 30 03:26:52.744758 containerd[1469]: time="2025-04-30T03:26:52.743845158Z" level=info msg="StopPodSandbox for \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\"" Apr 30 03:26:52.746019 containerd[1469]: time="2025-04-30T03:26:52.745918275Z" level=info msg="StopPodSandbox for \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\"" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.879 [INFO][4126] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.879 [INFO][4126] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" iface="eth0" netns="/var/run/netns/cni-f4f7d81b-2c9d-ef99-f86a-907c82aca72e" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.879 [INFO][4126] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" iface="eth0" netns="/var/run/netns/cni-f4f7d81b-2c9d-ef99-f86a-907c82aca72e" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.880 [INFO][4126] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" iface="eth0" netns="/var/run/netns/cni-f4f7d81b-2c9d-ef99-f86a-907c82aca72e" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.880 [INFO][4126] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.880 [INFO][4126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.931 [INFO][4144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" HandleID="k8s-pod-network.16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.932 [INFO][4144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.932 [INFO][4144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.944 [WARNING][4144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" HandleID="k8s-pod-network.16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.944 [INFO][4144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" HandleID="k8s-pod-network.16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.947 [INFO][4144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:26:52.956177 containerd[1469]: 2025-04-30 03:26:52.949 [INFO][4126] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:26:52.956177 containerd[1469]: time="2025-04-30T03:26:52.952401004Z" level=info msg="TearDown network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\" successfully" Apr 30 03:26:52.956177 containerd[1469]: time="2025-04-30T03:26:52.952435120Z" level=info msg="StopPodSandbox for \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\" returns successfully" Apr 30 03:26:52.959752 containerd[1469]: time="2025-04-30T03:26:52.959646876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccf698587-5rqrm,Uid:7b15cb4e-c1af-493f-b9f9-4cb6e0146639,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.898 [INFO][4133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.899 [INFO][4133] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" iface="eth0" netns="/var/run/netns/cni-badbad34-d8e6-bb74-17d6-9a29d8aadcdb" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.899 [INFO][4133] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" iface="eth0" netns="/var/run/netns/cni-badbad34-d8e6-bb74-17d6-9a29d8aadcdb" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.900 [INFO][4133] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" iface="eth0" netns="/var/run/netns/cni-badbad34-d8e6-bb74-17d6-9a29d8aadcdb" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.900 [INFO][4133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.900 [INFO][4133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.956 [INFO][4149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" HandleID="k8s-pod-network.c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.956 [INFO][4149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.956 [INFO][4149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.970 [WARNING][4149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" HandleID="k8s-pod-network.c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.970 [INFO][4149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" HandleID="k8s-pod-network.c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.977 [INFO][4149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:26:52.994030 containerd[1469]: 2025-04-30 03:26:52.985 [INFO][4133] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:26:52.996031 containerd[1469]: time="2025-04-30T03:26:52.995170551Z" level=info msg="TearDown network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\" successfully" Apr 30 03:26:52.996031 containerd[1469]: time="2025-04-30T03:26:52.995217422Z" level=info msg="StopPodSandbox for \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\" returns successfully" Apr 30 03:26:52.996909 containerd[1469]: time="2025-04-30T03:26:52.996837884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kld5k,Uid:b718a743-a03f-4839-ab75-67d0237668cd,Namespace:calico-system,Attempt:1,}" Apr 30 03:26:53.054082 systemd[1]: run-netns-cni\x2df4f7d81b\x2d2c9d\x2def99\x2df86a\x2d907c82aca72e.mount: Deactivated successfully. 
Apr 30 03:26:53.054196 systemd[1]: run-netns-cni\x2dbadbad34\x2dd8e6\x2dbb74\x2d17d6\x2d9a29d8aadcdb.mount: Deactivated successfully. Apr 30 03:26:53.100509 kubelet[2493]: E0430 03:26:53.099418 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:53.120484 kubelet[2493]: I0430 03:26:53.119285 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fj257" podStartSLOduration=36.119264703 podStartE2EDuration="36.119264703s" podCreationTimestamp="2025-04-30 03:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:26:53.11865706 +0000 UTC m=+42.518976844" watchObservedRunningTime="2025-04-30 03:26:53.119264703 +0000 UTC m=+42.519584486" Apr 30 03:26:53.391718 systemd-networkd[1372]: calid398d3a2f42: Link UP Apr 30 03:26:53.392082 systemd-networkd[1372]: calid398d3a2f42: Gained carrier Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.095 [INFO][4158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0 calico-apiserver-ccf698587- calico-apiserver 7b15cb4e-c1af-493f-b9f9-4cb6e0146639 787 0 2025-04-30 03:26:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccf698587 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-2-e7e0406ed5 calico-apiserver-ccf698587-5rqrm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid398d3a2f42 [] []}} ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" 
Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-5rqrm" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.095 [INFO][4158] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-5rqrm" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.191 [INFO][4184] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" HandleID="k8s-pod-network.61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.216 [INFO][4184] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" HandleID="k8s-pod-network.61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-2-e7e0406ed5", "pod":"calico-apiserver-ccf698587-5rqrm", "timestamp":"2025-04-30 03:26:53.190936397 +0000 UTC"}, Hostname:"ci-4081.3.3-2-e7e0406ed5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.216 [INFO][4184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.216 [INFO][4184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.216 [INFO][4184] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-2-e7e0406ed5' Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.221 [INFO][4184] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.322 [INFO][4184] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.345 [INFO][4184] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.348 [INFO][4184] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.352 [INFO][4184] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.352 [INFO][4184] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.355 [INFO][4184] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1 Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.371 [INFO][4184] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" 
host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.382 [INFO][4184] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.130/26] block=192.168.11.128/26 handle="k8s-pod-network.61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.382 [INFO][4184] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.130/26] handle="k8s-pod-network.61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.382 [INFO][4184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:26:53.436089 containerd[1469]: 2025-04-30 03:26:53.382 [INFO][4184] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.130/26] IPv6=[] ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" HandleID="k8s-pod-network.61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:53.439419 containerd[1469]: 2025-04-30 03:26:53.386 [INFO][4158] cni-plugin/k8s.go 386: Populated endpoint ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-5rqrm" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0", GenerateName:"calico-apiserver-ccf698587-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b15cb4e-c1af-493f-b9f9-4cb6e0146639", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 25, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccf698587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"", Pod:"calico-apiserver-ccf698587-5rqrm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid398d3a2f42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:26:53.439419 containerd[1469]: 2025-04-30 03:26:53.386 [INFO][4158] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.130/32] ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-5rqrm" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:53.439419 containerd[1469]: 2025-04-30 03:26:53.387 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid398d3a2f42 ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-5rqrm" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:53.439419 containerd[1469]: 2025-04-30 03:26:53.389 [INFO][4158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-5rqrm" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:53.439419 containerd[1469]: 2025-04-30 03:26:53.392 [INFO][4158] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-5rqrm" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0", GenerateName:"calico-apiserver-ccf698587-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b15cb4e-c1af-493f-b9f9-4cb6e0146639", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccf698587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1", Pod:"calico-apiserver-ccf698587-5rqrm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid398d3a2f42", MAC:"fa:ea:2f:0d:67:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:26:53.439419 containerd[1469]: 2025-04-30 03:26:53.428 [INFO][4158] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-5rqrm" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:26:53.502577 containerd[1469]: time="2025-04-30T03:26:53.501525855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:26:53.502577 containerd[1469]: time="2025-04-30T03:26:53.501596543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:26:53.502577 containerd[1469]: time="2025-04-30T03:26:53.501623706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:53.502577 containerd[1469]: time="2025-04-30T03:26:53.501729530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:53.539931 systemd-networkd[1372]: cali6545753b4dc: Link UP
Apr 30 03:26:53.543020 systemd-networkd[1372]: cali6545753b4dc: Gained carrier
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.098 [INFO][4167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0 csi-node-driver- calico-system b718a743-a03f-4839-ab75-67d0237668cd 788 0 2025-04-30 03:26:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-2-e7e0406ed5 csi-node-driver-kld5k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6545753b4dc [] []}} ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Namespace="calico-system" Pod="csi-node-driver-kld5k" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.099 [INFO][4167] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Namespace="calico-system" Pod="csi-node-driver-kld5k" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.200 [INFO][4182] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" HandleID="k8s-pod-network.b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.221 [INFO][4182] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" HandleID="k8s-pod-network.b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000393e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-2-e7e0406ed5", "pod":"csi-node-driver-kld5k", "timestamp":"2025-04-30 03:26:53.200564582 +0000 UTC"}, Hostname:"ci-4081.3.3-2-e7e0406ed5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.221 [INFO][4182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.382 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.383 [INFO][4182] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-2-e7e0406ed5'
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.397 [INFO][4182] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.426 [INFO][4182] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.435 [INFO][4182] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.444 [INFO][4182] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.449 [INFO][4182] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.449 [INFO][4182] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.455 [INFO][4182] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.486 [INFO][4182] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.515 [INFO][4182] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.131/26] block=192.168.11.128/26 handle="k8s-pod-network.b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.515 [INFO][4182] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.131/26] handle="k8s-pod-network.b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.515 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:26:53.594961 containerd[1469]: 2025-04-30 03:26:53.515 [INFO][4182] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.131/26] IPv6=[] ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" HandleID="k8s-pod-network.b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0"
Apr 30 03:26:53.598099 containerd[1469]: 2025-04-30 03:26:53.526 [INFO][4167] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Namespace="calico-system" Pod="csi-node-driver-kld5k" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b718a743-a03f-4839-ab75-67d0237668cd", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"", Pod:"csi-node-driver-kld5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6545753b4dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:26:53.598099 containerd[1469]: 2025-04-30 03:26:53.528 [INFO][4167] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.131/32] ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Namespace="calico-system" Pod="csi-node-driver-kld5k" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0"
Apr 30 03:26:53.598099 containerd[1469]: 2025-04-30 03:26:53.528 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6545753b4dc ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Namespace="calico-system" Pod="csi-node-driver-kld5k" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0"
Apr 30 03:26:53.598099 containerd[1469]: 2025-04-30 03:26:53.543 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Namespace="calico-system" Pod="csi-node-driver-kld5k" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0"
Apr 30 03:26:53.598099 containerd[1469]: 2025-04-30 03:26:53.550 [INFO][4167] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Namespace="calico-system" Pod="csi-node-driver-kld5k" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b718a743-a03f-4839-ab75-67d0237668cd", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3", Pod:"csi-node-driver-kld5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6545753b4dc", MAC:"a2:04:0f:e3:2a:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:26:53.598099 containerd[1469]: 2025-04-30 03:26:53.585 [INFO][4167] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3" Namespace="calico-system" Pod="csi-node-driver-kld5k" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0"
Apr 30 03:26:53.600876 systemd[1]: Started cri-containerd-61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1.scope - libcontainer container 61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1.
Apr 30 03:26:53.666267 containerd[1469]: time="2025-04-30T03:26:53.666074805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:26:53.666705 containerd[1469]: time="2025-04-30T03:26:53.666188796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:26:53.666705 containerd[1469]: time="2025-04-30T03:26:53.666211536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:53.666705 containerd[1469]: time="2025-04-30T03:26:53.666385505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:53.716825 systemd[1]: Started cri-containerd-b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3.scope - libcontainer container b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3.
Apr 30 03:26:53.727287 systemd[1]: Started sshd@7-24.199.113.144:22-139.178.89.65:55540.service - OpenSSH per-connection server daemon (139.178.89.65:55540).
Apr 30 03:26:53.804748 containerd[1469]: time="2025-04-30T03:26:53.804694052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kld5k,Uid:b718a743-a03f-4839-ab75-67d0237668cd,Namespace:calico-system,Attempt:1,} returns sandbox id \"b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3\""
Apr 30 03:26:53.820078 containerd[1469]: time="2025-04-30T03:26:53.819724025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
Apr 30 03:26:53.820078 containerd[1469]: time="2025-04-30T03:26:53.819880381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccf698587-5rqrm,Uid:7b15cb4e-c1af-493f-b9f9-4cb6e0146639,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1\""
Apr 30 03:26:53.863353 sshd[4290]: Accepted publickey for core from 139.178.89.65 port 55540 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:26:53.866530 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:26:53.875021 systemd-logind[1446]: New session 8 of user core.
Apr 30 03:26:53.886795 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 03:26:54.052874 systemd-networkd[1372]: cali485c638305d: Gained IPv6LL
Apr 30 03:26:54.119819 kubelet[2493]: E0430 03:26:54.119686 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:54.255433 sshd[4290]: pam_unix(sshd:session): session closed for user core
Apr 30 03:26:54.261306 systemd[1]: sshd@7-24.199.113.144:22-139.178.89.65:55540.service: Deactivated successfully.
Apr 30 03:26:54.263832 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 03:26:54.264780 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit.
Apr 30 03:26:54.266615 systemd-logind[1446]: Removed session 8.
Apr 30 03:26:54.688688 systemd-networkd[1372]: calid398d3a2f42: Gained IPv6LL
Apr 30 03:26:54.745517 containerd[1469]: time="2025-04-30T03:26:54.744427384Z" level=info msg="StopPodSandbox for \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\""
Apr 30 03:26:54.746087 containerd[1469]: time="2025-04-30T03:26:54.745785120Z" level=info msg="StopPodSandbox for \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\""
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.865 [INFO][4350] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0"
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.867 [INFO][4350] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" iface="eth0" netns="/var/run/netns/cni-d38b2ff5-c4cb-0b84-9f5c-80e5342bd903"
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.867 [INFO][4350] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" iface="eth0" netns="/var/run/netns/cni-d38b2ff5-c4cb-0b84-9f5c-80e5342bd903"
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.868 [INFO][4350] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" iface="eth0" netns="/var/run/netns/cni-d38b2ff5-c4cb-0b84-9f5c-80e5342bd903"
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.868 [INFO][4350] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0"
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.868 [INFO][4350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0"
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.910 [INFO][4368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" HandleID="k8s-pod-network.ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.910 [INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.910 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.919 [WARNING][4368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" HandleID="k8s-pod-network.ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.919 [INFO][4368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" HandleID="k8s-pod-network.ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.921 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:26:54.927765 containerd[1469]: 2025-04-30 03:26:54.924 [INFO][4350] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0"
Apr 30 03:26:54.930864 containerd[1469]: time="2025-04-30T03:26:54.930359750Z" level=info msg="TearDown network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\" successfully"
Apr 30 03:26:54.931349 containerd[1469]: time="2025-04-30T03:26:54.931082604Z" level=info msg="StopPodSandbox for \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\" returns successfully"
Apr 30 03:26:54.934062 kubelet[2493]: E0430 03:26:54.934025 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:54.936121 containerd[1469]: time="2025-04-30T03:26:54.934839247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4b7t,Uid:73a01c76-53ce-4689-98ef-5958ffbdd467,Namespace:kube-system,Attempt:1,}"
Apr 30 03:26:54.935722 systemd[1]: run-netns-cni\x2dd38b2ff5\x2dc4cb\x2d0b84\x2d9f5c\x2d80e5342bd903.mount: Deactivated successfully.
Apr 30 03:26:54.944640 systemd-networkd[1372]: cali6545753b4dc: Gained IPv6LL
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.860 [INFO][4354] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067"
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.860 [INFO][4354] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" iface="eth0" netns="/var/run/netns/cni-0082c647-4600-43fd-38d2-ce2e910e31d7"
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.860 [INFO][4354] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" iface="eth0" netns="/var/run/netns/cni-0082c647-4600-43fd-38d2-ce2e910e31d7"
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.864 [INFO][4354] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" iface="eth0" netns="/var/run/netns/cni-0082c647-4600-43fd-38d2-ce2e910e31d7"
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.864 [INFO][4354] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067"
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.864 [INFO][4354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067"
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.913 [INFO][4363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" HandleID="k8s-pod-network.063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0"
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.913 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.921 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.937 [WARNING][4363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" HandleID="k8s-pod-network.063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0"
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.938 [INFO][4363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" HandleID="k8s-pod-network.063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0"
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.949 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:26:54.959788 containerd[1469]: 2025-04-30 03:26:54.954 [INFO][4354] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067"
Apr 30 03:26:54.961333 containerd[1469]: time="2025-04-30T03:26:54.960099738Z" level=info msg="TearDown network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\" successfully"
Apr 30 03:26:54.961333 containerd[1469]: time="2025-04-30T03:26:54.960135619Z" level=info msg="StopPodSandbox for \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\" returns successfully"
Apr 30 03:26:54.963589 containerd[1469]: time="2025-04-30T03:26:54.962792227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566d49dfb-hxnld,Uid:fc52b32e-625c-4cbe-89c1-725d343029fc,Namespace:calico-system,Attempt:1,}"
Apr 30 03:26:54.964125 systemd[1]: run-netns-cni\x2d0082c647\x2d4600\x2d43fd\x2d38d2\x2dce2e910e31d7.mount: Deactivated successfully.
Apr 30 03:26:55.123477 kubelet[2493]: E0430 03:26:55.123214 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:26:55.266622 systemd-networkd[1372]: cali096f8454293: Link UP
Apr 30 03:26:55.269882 systemd-networkd[1372]: cali096f8454293: Gained carrier
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.095 [INFO][4377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0 coredns-668d6bf9bc- kube-system 73a01c76-53ce-4689-98ef-5958ffbdd467 846 0 2025-04-30 03:26:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-2-e7e0406ed5 coredns-668d6bf9bc-c4b7t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali096f8454293 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4b7t" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.096 [INFO][4377] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4b7t" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.182 [INFO][4403] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" HandleID="k8s-pod-network.9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.205 [INFO][4403] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" HandleID="k8s-pod-network.9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051b40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-2-e7e0406ed5", "pod":"coredns-668d6bf9bc-c4b7t", "timestamp":"2025-04-30 03:26:55.182296905 +0000 UTC"}, Hostname:"ci-4081.3.3-2-e7e0406ed5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.205 [INFO][4403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.206 [INFO][4403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.206 [INFO][4403] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-2-e7e0406ed5'
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.212 [INFO][4403] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.220 [INFO][4403] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.229 [INFO][4403] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.233 [INFO][4403] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.238 [INFO][4403] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.238 [INFO][4403] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.242 [INFO][4403] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.248 [INFO][4403] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.257 [INFO][4403] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.132/26] block=192.168.11.128/26 handle="k8s-pod-network.9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.258 [INFO][4403] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.132/26] handle="k8s-pod-network.9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" host="ci-4081.3.3-2-e7e0406ed5"
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.258 [INFO][4403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:26:55.290642 containerd[1469]: 2025-04-30 03:26:55.258 [INFO][4403] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.132/26] IPv6=[] ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" HandleID="k8s-pod-network.9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:55.293114 containerd[1469]: 2025-04-30 03:26:55.262 [INFO][4377] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4b7t" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73a01c76-53ce-4689-98ef-5958ffbdd467", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"", Pod:"coredns-668d6bf9bc-c4b7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali096f8454293", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:26:55.293114 containerd[1469]: 2025-04-30 03:26:55.262 [INFO][4377] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.132/32] ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4b7t" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:55.293114 containerd[1469]: 2025-04-30 03:26:55.262 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali096f8454293 ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4b7t" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:55.293114 containerd[1469]: 2025-04-30 03:26:55.269 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4b7t" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:55.293114 containerd[1469]: 2025-04-30 03:26:55.269 [INFO][4377] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4b7t" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73a01c76-53ce-4689-98ef-5958ffbdd467", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e", Pod:"coredns-668d6bf9bc-c4b7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali096f8454293", MAC:"16:66:28:2f:f4:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:26:55.293114 containerd[1469]: 2025-04-30 03:26:55.285 [INFO][4377] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4b7t" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0"
Apr 30 03:26:55.355313 containerd[1469]: time="2025-04-30T03:26:55.350847346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:26:55.355313 containerd[1469]: time="2025-04-30T03:26:55.350947161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:26:55.355313 containerd[1469]: time="2025-04-30T03:26:55.350969401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:55.355313 containerd[1469]: time="2025-04-30T03:26:55.351112770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:55.393740 systemd[1]: Started cri-containerd-9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e.scope - libcontainer container 9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e.
Apr 30 03:26:55.414094 systemd-networkd[1372]: caliae5aea1ad1c: Link UP Apr 30 03:26:55.415781 systemd-networkd[1372]: caliae5aea1ad1c: Gained carrier Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.119 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0 calico-kube-controllers-566d49dfb- calico-system fc52b32e-625c-4cbe-89c1-725d343029fc 845 0 2025-04-30 03:26:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:566d49dfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-2-e7e0406ed5 calico-kube-controllers-566d49dfb-hxnld eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliae5aea1ad1c [] []}} ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" Namespace="calico-system" Pod="calico-kube-controllers-566d49dfb-hxnld" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.119 [INFO][4382] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" Namespace="calico-system" Pod="calico-kube-controllers-566d49dfb-hxnld" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.196 [INFO][4408] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" HandleID="k8s-pod-network.adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" 
Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.216 [INFO][4408] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" HandleID="k8s-pod-network.adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290f60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-2-e7e0406ed5", "pod":"calico-kube-controllers-566d49dfb-hxnld", "timestamp":"2025-04-30 03:26:55.196158779 +0000 UTC"}, Hostname:"ci-4081.3.3-2-e7e0406ed5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.217 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.258 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.259 [INFO][4408] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-2-e7e0406ed5' Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.317 [INFO][4408] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.328 [INFO][4408] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.347 [INFO][4408] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.359 [INFO][4408] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.364 [INFO][4408] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.364 [INFO][4408] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.369 [INFO][4408] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.386 [INFO][4408] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.399 [INFO][4408] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.11.133/26] block=192.168.11.128/26 handle="k8s-pod-network.adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.400 [INFO][4408] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.133/26] handle="k8s-pod-network.adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.400 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:26:55.459000 containerd[1469]: 2025-04-30 03:26:55.400 [INFO][4408] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.133/26] IPv6=[] ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" HandleID="k8s-pod-network.adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:26:55.460613 containerd[1469]: 2025-04-30 03:26:55.406 [INFO][4382] cni-plugin/k8s.go 386: Populated endpoint ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" Namespace="calico-system" Pod="calico-kube-controllers-566d49dfb-hxnld" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0", GenerateName:"calico-kube-controllers-566d49dfb-", Namespace:"calico-system", SelfLink:"", UID:"fc52b32e-625c-4cbe-89c1-725d343029fc", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566d49dfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"", Pod:"calico-kube-controllers-566d49dfb-hxnld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae5aea1ad1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:26:55.460613 containerd[1469]: 2025-04-30 03:26:55.407 [INFO][4382] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.133/32] ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" Namespace="calico-system" Pod="calico-kube-controllers-566d49dfb-hxnld" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:26:55.460613 containerd[1469]: 2025-04-30 03:26:55.407 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae5aea1ad1c ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" Namespace="calico-system" Pod="calico-kube-controllers-566d49dfb-hxnld" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:26:55.460613 containerd[1469]: 2025-04-30 03:26:55.416 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" 
Namespace="calico-system" Pod="calico-kube-controllers-566d49dfb-hxnld" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:26:55.460613 containerd[1469]: 2025-04-30 03:26:55.417 [INFO][4382] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" Namespace="calico-system" Pod="calico-kube-controllers-566d49dfb-hxnld" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0", GenerateName:"calico-kube-controllers-566d49dfb-", Namespace:"calico-system", SelfLink:"", UID:"fc52b32e-625c-4cbe-89c1-725d343029fc", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566d49dfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b", Pod:"calico-kube-controllers-566d49dfb-hxnld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae5aea1ad1c", MAC:"62:ce:d2:ca:57:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:26:55.460613 containerd[1469]: 2025-04-30 03:26:55.449 [INFO][4382] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b" Namespace="calico-system" Pod="calico-kube-controllers-566d49dfb-hxnld" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:26:55.498478 containerd[1469]: time="2025-04-30T03:26:55.498222651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4b7t,Uid:73a01c76-53ce-4689-98ef-5958ffbdd467,Namespace:kube-system,Attempt:1,} returns sandbox id \"9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e\"" Apr 30 03:26:55.499993 kubelet[2493]: E0430 03:26:55.499959 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:55.518931 containerd[1469]: time="2025-04-30T03:26:55.516979126Z" level=info msg="CreateContainer within sandbox \"9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:26:55.541430 containerd[1469]: time="2025-04-30T03:26:55.541370814Z" level=info msg="CreateContainer within sandbox \"9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f9edad31d1e779e6458500c78005a02f59f1528f2e209e2ee968a06863c9a36\"" Apr 30 03:26:55.544190 containerd[1469]: time="2025-04-30T03:26:55.544122473Z" level=info msg="StartContainer for \"3f9edad31d1e779e6458500c78005a02f59f1528f2e209e2ee968a06863c9a36\"" Apr 30 03:26:55.551609 containerd[1469]: time="2025-04-30T03:26:55.550690522Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:26:55.551609 containerd[1469]: time="2025-04-30T03:26:55.550817565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:26:55.551609 containerd[1469]: time="2025-04-30T03:26:55.550863824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:55.551609 containerd[1469]: time="2025-04-30T03:26:55.551060551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:55.584731 systemd[1]: Started cri-containerd-adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b.scope - libcontainer container adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b. Apr 30 03:26:55.596950 systemd[1]: Started cri-containerd-3f9edad31d1e779e6458500c78005a02f59f1528f2e209e2ee968a06863c9a36.scope - libcontainer container 3f9edad31d1e779e6458500c78005a02f59f1528f2e209e2ee968a06863c9a36. 
Apr 30 03:26:55.643107 containerd[1469]: time="2025-04-30T03:26:55.643054346Z" level=info msg="StartContainer for \"3f9edad31d1e779e6458500c78005a02f59f1528f2e209e2ee968a06863c9a36\" returns successfully" Apr 30 03:26:55.686592 containerd[1469]: time="2025-04-30T03:26:55.686340576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566d49dfb-hxnld,Uid:fc52b32e-625c-4cbe-89c1-725d343029fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b\"" Apr 30 03:26:56.130733 kubelet[2493]: E0430 03:26:56.130695 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:56.207595 kubelet[2493]: I0430 03:26:56.205148 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c4b7t" podStartSLOduration=39.205098639 podStartE2EDuration="39.205098639s" podCreationTimestamp="2025-04-30 03:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:26:56.171839642 +0000 UTC m=+45.572159425" watchObservedRunningTime="2025-04-30 03:26:56.205098639 +0000 UTC m=+45.605418423" Apr 30 03:26:56.399900 containerd[1469]: time="2025-04-30T03:26:56.397892691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:56.399900 containerd[1469]: time="2025-04-30T03:26:56.399060134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:26:56.400954 containerd[1469]: time="2025-04-30T03:26:56.400818532Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:56.411999 containerd[1469]: time="2025-04-30T03:26:56.411937646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.592154072s" Apr 30 03:26:56.412340 containerd[1469]: time="2025-04-30T03:26:56.412285875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:26:56.412560 containerd[1469]: time="2025-04-30T03:26:56.412237290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:26:56.417149 containerd[1469]: time="2025-04-30T03:26:56.416534596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:26:56.417655 systemd-networkd[1372]: cali096f8454293: Gained IPv6LL Apr 30 03:26:56.445099 containerd[1469]: time="2025-04-30T03:26:56.444508096Z" level=info msg="CreateContainer within sandbox \"b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:26:56.476409 containerd[1469]: time="2025-04-30T03:26:56.475609464Z" level=info msg="CreateContainer within sandbox \"b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c42d2a0023a3630649aa4e906de872451795edad87b5c4a49e08405b018ba0b8\"" Apr 30 03:26:56.477692 containerd[1469]: time="2025-04-30T03:26:56.477650421Z" level=info msg="StartContainer for 
\"c42d2a0023a3630649aa4e906de872451795edad87b5c4a49e08405b018ba0b8\"" Apr 30 03:26:56.537959 systemd[1]: Started cri-containerd-c42d2a0023a3630649aa4e906de872451795edad87b5c4a49e08405b018ba0b8.scope - libcontainer container c42d2a0023a3630649aa4e906de872451795edad87b5c4a49e08405b018ba0b8. Apr 30 03:26:56.619218 containerd[1469]: time="2025-04-30T03:26:56.617652276Z" level=info msg="StartContainer for \"c42d2a0023a3630649aa4e906de872451795edad87b5c4a49e08405b018ba0b8\" returns successfully" Apr 30 03:26:56.744862 containerd[1469]: time="2025-04-30T03:26:56.743367652Z" level=info msg="StopPodSandbox for \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\"" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.869 [INFO][4627] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.870 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" iface="eth0" netns="/var/run/netns/cni-f23d682a-19c6-f0d1-af5a-6166e09e89a6" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.870 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" iface="eth0" netns="/var/run/netns/cni-f23d682a-19c6-f0d1-af5a-6166e09e89a6" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.870 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" iface="eth0" netns="/var/run/netns/cni-f23d682a-19c6-f0d1-af5a-6166e09e89a6" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.870 [INFO][4627] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.870 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.917 [INFO][4635] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" HandleID="k8s-pod-network.21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.917 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.917 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.928 [WARNING][4635] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" HandleID="k8s-pod-network.21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.928 [INFO][4635] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" HandleID="k8s-pod-network.21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.937 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:26:56.943670 containerd[1469]: 2025-04-30 03:26:56.940 [INFO][4627] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:26:56.945281 containerd[1469]: time="2025-04-30T03:26:56.944565933Z" level=info msg="TearDown network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\" successfully" Apr 30 03:26:56.945281 containerd[1469]: time="2025-04-30T03:26:56.944609744Z" level=info msg="StopPodSandbox for \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\" returns successfully" Apr 30 03:26:56.946277 containerd[1469]: time="2025-04-30T03:26:56.945742011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccf698587-gjwrf,Uid:49d8fe7d-2d8b-449c-8513-592e9baca4b5,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:26:56.953876 systemd[1]: run-netns-cni\x2df23d682a\x2d19c6\x2df0d1\x2daf5a\x2d6166e09e89a6.mount: Deactivated successfully. 
Apr 30 03:26:57.140167 kubelet[2493]: E0430 03:26:57.140132 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:57.178038 systemd-networkd[1372]: cali07700dbb7a8: Link UP Apr 30 03:26:57.179719 systemd-networkd[1372]: cali07700dbb7a8: Gained carrier Apr 30 03:26:57.186218 systemd-networkd[1372]: caliae5aea1ad1c: Gained IPv6LL Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.038 [INFO][4641] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0 calico-apiserver-ccf698587- calico-apiserver 49d8fe7d-2d8b-449c-8513-592e9baca4b5 889 0 2025-04-30 03:26:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccf698587 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-2-e7e0406ed5 calico-apiserver-ccf698587-gjwrf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07700dbb7a8 [] []}} ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-gjwrf" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.038 [INFO][4641] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-gjwrf" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.092 [INFO][4653] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" HandleID="k8s-pod-network.19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.105 [INFO][4653] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" HandleID="k8s-pod-network.19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000200140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-2-e7e0406ed5", "pod":"calico-apiserver-ccf698587-gjwrf", "timestamp":"2025-04-30 03:26:57.092952796 +0000 UTC"}, Hostname:"ci-4081.3.3-2-e7e0406ed5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.106 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.106 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.106 [INFO][4653] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-2-e7e0406ed5' Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.110 [INFO][4653] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.121 [INFO][4653] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.136 [INFO][4653] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.144 [INFO][4653] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.147 [INFO][4653] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.147 [INFO][4653] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.149 [INFO][4653] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355 Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.155 [INFO][4653] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.166 [INFO][4653] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.11.134/26] block=192.168.11.128/26 handle="k8s-pod-network.19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.167 [INFO][4653] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.134/26] handle="k8s-pod-network.19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" host="ci-4081.3.3-2-e7e0406ed5" Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.167 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:26:57.216559 containerd[1469]: 2025-04-30 03:26:57.167 [INFO][4653] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.134/26] IPv6=[] ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" HandleID="k8s-pod-network.19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:57.217581 containerd[1469]: 2025-04-30 03:26:57.171 [INFO][4641] cni-plugin/k8s.go 386: Populated endpoint ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-gjwrf" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0", GenerateName:"calico-apiserver-ccf698587-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d8fe7d-2d8b-449c-8513-592e9baca4b5", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"ccf698587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"", Pod:"calico-apiserver-ccf698587-gjwrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07700dbb7a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:26:57.217581 containerd[1469]: 2025-04-30 03:26:57.171 [INFO][4641] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.134/32] ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-gjwrf" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:57.217581 containerd[1469]: 2025-04-30 03:26:57.171 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07700dbb7a8 ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-gjwrf" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:57.217581 containerd[1469]: 2025-04-30 03:26:57.179 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-gjwrf" 
WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:57.217581 containerd[1469]: 2025-04-30 03:26:57.179 [INFO][4641] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-gjwrf" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0", GenerateName:"calico-apiserver-ccf698587-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d8fe7d-2d8b-449c-8513-592e9baca4b5", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccf698587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355", Pod:"calico-apiserver-ccf698587-gjwrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07700dbb7a8", MAC:"be:a4:f0:d6:32:b3", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:26:57.217581 containerd[1469]: 2025-04-30 03:26:57.209 [INFO][4641] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355" Namespace="calico-apiserver" Pod="calico-apiserver-ccf698587-gjwrf" WorkloadEndpoint="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:26:57.258606 containerd[1469]: time="2025-04-30T03:26:57.257124803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:26:57.259540 containerd[1469]: time="2025-04-30T03:26:57.258639719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:26:57.259540 containerd[1469]: time="2025-04-30T03:26:57.258701505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:57.259540 containerd[1469]: time="2025-04-30T03:26:57.258840034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:26:57.311706 systemd[1]: Started cri-containerd-19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355.scope - libcontainer container 19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355. 
Apr 30 03:26:57.394043 containerd[1469]: time="2025-04-30T03:26:57.391875711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccf698587-gjwrf,Uid:49d8fe7d-2d8b-449c-8513-592e9baca4b5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355\"" Apr 30 03:26:58.146300 kubelet[2493]: E0430 03:26:58.144259 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:26:59.109373 systemd-networkd[1372]: cali07700dbb7a8: Gained IPv6LL Apr 30 03:26:59.284915 systemd[1]: Started sshd@8-24.199.113.144:22-139.178.89.65:47954.service - OpenSSH per-connection server daemon (139.178.89.65:47954). Apr 30 03:26:59.407584 sshd[4720]: Accepted publickey for core from 139.178.89.65 port 47954 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:26:59.413314 sshd[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:26:59.428162 systemd-logind[1446]: New session 9 of user core. Apr 30 03:26:59.432713 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:26:59.880874 sshd[4720]: pam_unix(sshd:session): session closed for user core Apr 30 03:26:59.886647 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:26:59.887428 systemd[1]: sshd@8-24.199.113.144:22-139.178.89.65:47954.service: Deactivated successfully. Apr 30 03:26:59.891168 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:26:59.892836 systemd-logind[1446]: Removed session 9. 
Apr 30 03:27:00.391613 containerd[1469]: time="2025-04-30T03:27:00.391544688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:00.395386 containerd[1469]: time="2025-04-30T03:27:00.395212294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:27:00.395618 containerd[1469]: time="2025-04-30T03:27:00.395556403Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:00.399586 containerd[1469]: time="2025-04-30T03:27:00.399365392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:00.400724 containerd[1469]: time="2025-04-30T03:27:00.400635653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.984041287s" Apr 30 03:27:00.400724 containerd[1469]: time="2025-04-30T03:27:00.400696789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:27:00.404315 containerd[1469]: time="2025-04-30T03:27:00.404258609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:27:00.412668 containerd[1469]: time="2025-04-30T03:27:00.412441625Z" level=info msg="CreateContainer within sandbox 
\"61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:27:00.437545 containerd[1469]: time="2025-04-30T03:27:00.437292530Z" level=info msg="CreateContainer within sandbox \"61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1cfd9445c73019581bd5ff1ed9b236bcb7a37cd17b30190f3c84bd2b93b7c74e\"" Apr 30 03:27:00.439997 containerd[1469]: time="2025-04-30T03:27:00.438668207Z" level=info msg="StartContainer for \"1cfd9445c73019581bd5ff1ed9b236bcb7a37cd17b30190f3c84bd2b93b7c74e\"" Apr 30 03:27:00.561829 systemd[1]: Started cri-containerd-1cfd9445c73019581bd5ff1ed9b236bcb7a37cd17b30190f3c84bd2b93b7c74e.scope - libcontainer container 1cfd9445c73019581bd5ff1ed9b236bcb7a37cd17b30190f3c84bd2b93b7c74e. Apr 30 03:27:00.654996 containerd[1469]: time="2025-04-30T03:27:00.654827802Z" level=info msg="StartContainer for \"1cfd9445c73019581bd5ff1ed9b236bcb7a37cd17b30190f3c84bd2b93b7c74e\" returns successfully" Apr 30 03:27:02.862889 kubelet[2493]: I0430 03:27:02.862727 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ccf698587-5rqrm" podStartSLOduration=31.28318558 podStartE2EDuration="37.862695079s" podCreationTimestamp="2025-04-30 03:26:25 +0000 UTC" firstStartedPulling="2025-04-30 03:26:53.823977041 +0000 UTC m=+43.224296818" lastFinishedPulling="2025-04-30 03:27:00.403486531 +0000 UTC m=+49.803806317" observedRunningTime="2025-04-30 03:27:01.224554537 +0000 UTC m=+50.624874323" watchObservedRunningTime="2025-04-30 03:27:02.862695079 +0000 UTC m=+52.263014863" Apr 30 03:27:03.724502 containerd[1469]: time="2025-04-30T03:27:03.723701764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:03.727388 containerd[1469]: 
time="2025-04-30T03:27:03.727314049Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:03.727931 containerd[1469]: time="2025-04-30T03:27:03.727869620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:27:03.732720 containerd[1469]: time="2025-04-30T03:27:03.732609889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:03.734086 containerd[1469]: time="2025-04-30T03:27:03.733365719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.327517829s" Apr 30 03:27:03.734086 containerd[1469]: time="2025-04-30T03:27:03.733417872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:27:03.736290 containerd[1469]: time="2025-04-30T03:27:03.736253062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:27:03.769646 containerd[1469]: time="2025-04-30T03:27:03.769473630Z" level=info msg="CreateContainer within sandbox \"adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:27:03.783883 containerd[1469]: time="2025-04-30T03:27:03.783678844Z" level=info msg="CreateContainer within 
sandbox \"adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fe9c1d552f54d73f020e93158aa84d0ffaf6652340af6f255544d25f348d6110\"" Apr 30 03:27:03.787145 containerd[1469]: time="2025-04-30T03:27:03.786354362Z" level=info msg="StartContainer for \"fe9c1d552f54d73f020e93158aa84d0ffaf6652340af6f255544d25f348d6110\"" Apr 30 03:27:03.791125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819770169.mount: Deactivated successfully. Apr 30 03:27:03.848893 systemd[1]: Started cri-containerd-fe9c1d552f54d73f020e93158aa84d0ffaf6652340af6f255544d25f348d6110.scope - libcontainer container fe9c1d552f54d73f020e93158aa84d0ffaf6652340af6f255544d25f348d6110. Apr 30 03:27:03.912849 containerd[1469]: time="2025-04-30T03:27:03.912792629Z" level=info msg="StartContainer for \"fe9c1d552f54d73f020e93158aa84d0ffaf6652340af6f255544d25f348d6110\" returns successfully" Apr 30 03:27:04.294501 kubelet[2493]: I0430 03:27:04.294393 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-566d49dfb-hxnld" podStartSLOduration=30.247314319 podStartE2EDuration="38.294372695s" podCreationTimestamp="2025-04-30 03:26:26 +0000 UTC" firstStartedPulling="2025-04-30 03:26:55.689010794 +0000 UTC m=+45.089330556" lastFinishedPulling="2025-04-30 03:27:03.736069166 +0000 UTC m=+53.136388932" observedRunningTime="2025-04-30 03:27:04.22641511 +0000 UTC m=+53.626734887" watchObservedRunningTime="2025-04-30 03:27:04.294372695 +0000 UTC m=+53.694692478" Apr 30 03:27:04.901703 systemd[1]: Started sshd@9-24.199.113.144:22-139.178.89.65:47964.service - OpenSSH per-connection server daemon (139.178.89.65:47964). 
Apr 30 03:27:05.012126 sshd[4857]: Accepted publickey for core from 139.178.89.65 port 47964 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:05.014629 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:05.021310 systemd-logind[1446]: New session 10 of user core. Apr 30 03:27:05.031772 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:27:05.394001 sshd[4857]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:05.398266 systemd[1]: sshd@9-24.199.113.144:22-139.178.89.65:47964.service: Deactivated successfully. Apr 30 03:27:05.402585 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:27:05.403694 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:27:05.404781 systemd-logind[1446]: Removed session 10. Apr 30 03:27:06.387562 containerd[1469]: time="2025-04-30T03:27:06.387372813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:06.388325 containerd[1469]: time="2025-04-30T03:27:06.388165832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:27:06.390218 containerd[1469]: time="2025-04-30T03:27:06.390154171Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:06.393572 containerd[1469]: time="2025-04-30T03:27:06.392291705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:06.394770 containerd[1469]: time="2025-04-30T03:27:06.394715111Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.65809598s" Apr 30 03:27:06.394997 containerd[1469]: time="2025-04-30T03:27:06.394969766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:27:06.398085 containerd[1469]: time="2025-04-30T03:27:06.397064341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:27:06.399193 containerd[1469]: time="2025-04-30T03:27:06.399143034Z" level=info msg="CreateContainer within sandbox \"b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:27:06.431967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22473940.mount: Deactivated successfully. Apr 30 03:27:06.436959 containerd[1469]: time="2025-04-30T03:27:06.436871747Z" level=info msg="CreateContainer within sandbox \"b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ec483ea720c447fcccf90d1bcdd55b541e4794108fedf0aed8f877ebfc557390\"" Apr 30 03:27:06.437879 containerd[1469]: time="2025-04-30T03:27:06.437818600Z" level=info msg="StartContainer for \"ec483ea720c447fcccf90d1bcdd55b541e4794108fedf0aed8f877ebfc557390\"" Apr 30 03:27:06.498142 systemd[1]: run-containerd-runc-k8s.io-ec483ea720c447fcccf90d1bcdd55b541e4794108fedf0aed8f877ebfc557390-runc.EjTaZE.mount: Deactivated successfully. 
Apr 30 03:27:06.509844 systemd[1]: Started cri-containerd-ec483ea720c447fcccf90d1bcdd55b541e4794108fedf0aed8f877ebfc557390.scope - libcontainer container ec483ea720c447fcccf90d1bcdd55b541e4794108fedf0aed8f877ebfc557390. Apr 30 03:27:06.555753 containerd[1469]: time="2025-04-30T03:27:06.555517694Z" level=info msg="StartContainer for \"ec483ea720c447fcccf90d1bcdd55b541e4794108fedf0aed8f877ebfc557390\" returns successfully" Apr 30 03:27:06.947978 kubelet[2493]: I0430 03:27:06.947813 2493 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:27:06.950795 kubelet[2493]: I0430 03:27:06.950483 2493 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:27:07.133676 containerd[1469]: time="2025-04-30T03:27:07.133604047Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:07.134257 containerd[1469]: time="2025-04-30T03:27:07.134211821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:27:07.137631 containerd[1469]: time="2025-04-30T03:27:07.137170644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 738.574825ms" Apr 30 03:27:07.137631 containerd[1469]: time="2025-04-30T03:27:07.137223750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 
03:27:07.147258 containerd[1469]: time="2025-04-30T03:27:07.147200932Z" level=info msg="CreateContainer within sandbox \"19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:27:07.157863 containerd[1469]: time="2025-04-30T03:27:07.157778347Z" level=info msg="CreateContainer within sandbox \"19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f327d8806146f12b6d51b5a6f7cc912ad229229b8a757c0cff061aef485a8bcc\"" Apr 30 03:27:07.160161 containerd[1469]: time="2025-04-30T03:27:07.158693013Z" level=info msg="StartContainer for \"f327d8806146f12b6d51b5a6f7cc912ad229229b8a757c0cff061aef485a8bcc\"" Apr 30 03:27:07.195969 systemd[1]: Started cri-containerd-f327d8806146f12b6d51b5a6f7cc912ad229229b8a757c0cff061aef485a8bcc.scope - libcontainer container f327d8806146f12b6d51b5a6f7cc912ad229229b8a757c0cff061aef485a8bcc. 
Apr 30 03:27:07.326021 containerd[1469]: time="2025-04-30T03:27:07.325978252Z" level=info msg="StartContainer for \"f327d8806146f12b6d51b5a6f7cc912ad229229b8a757c0cff061aef485a8bcc\" returns successfully" Apr 30 03:27:08.258784 kubelet[2493]: I0430 03:27:08.258689 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ccf698587-gjwrf" podStartSLOduration=33.515587301 podStartE2EDuration="43.258668525s" podCreationTimestamp="2025-04-30 03:26:25 +0000 UTC" firstStartedPulling="2025-04-30 03:26:57.395793338 +0000 UTC m=+46.796113104" lastFinishedPulling="2025-04-30 03:27:07.138874554 +0000 UTC m=+56.539194328" observedRunningTime="2025-04-30 03:27:08.257069424 +0000 UTC m=+57.657389205" watchObservedRunningTime="2025-04-30 03:27:08.258668525 +0000 UTC m=+57.658988306" Apr 30 03:27:08.261217 kubelet[2493]: I0430 03:27:08.260568 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kld5k" podStartSLOduration=29.672812813 podStartE2EDuration="42.260548094s" podCreationTimestamp="2025-04-30 03:26:26 +0000 UTC" firstStartedPulling="2025-04-30 03:26:53.808319925 +0000 UTC m=+43.208639699" lastFinishedPulling="2025-04-30 03:27:06.3960552 +0000 UTC m=+55.796374980" observedRunningTime="2025-04-30 03:27:07.24911249 +0000 UTC m=+56.649432272" watchObservedRunningTime="2025-04-30 03:27:08.260548094 +0000 UTC m=+57.660867867" Apr 30 03:27:10.414950 systemd[1]: Started sshd@10-24.199.113.144:22-139.178.89.65:50234.service - OpenSSH per-connection server daemon (139.178.89.65:50234). Apr 30 03:27:10.527496 sshd[4967]: Accepted publickey for core from 139.178.89.65 port 50234 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:10.528820 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:10.535620 systemd-logind[1446]: New session 11 of user core. 
Apr 30 03:27:10.547980 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:27:10.985555 containerd[1469]: time="2025-04-30T03:27:10.983106107Z" level=info msg="StopPodSandbox for \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\"" Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.204 [WARNING][4994] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0", GenerateName:"calico-kube-controllers-566d49dfb-", Namespace:"calico-system", SelfLink:"", UID:"fc52b32e-625c-4cbe-89c1-725d343029fc", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566d49dfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b", Pod:"calico-kube-controllers-566d49dfb-hxnld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"caliae5aea1ad1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.208 [INFO][4994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.208 [INFO][4994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" iface="eth0" netns="" Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.208 [INFO][4994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.210 [INFO][4994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.318 [INFO][5001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" HandleID="k8s-pod-network.063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.318 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.318 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.342 [WARNING][5001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" HandleID="k8s-pod-network.063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.343 [INFO][5001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" HandleID="k8s-pod-network.063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.348 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:11.372915 containerd[1469]: 2025-04-30 03:27:11.360 [INFO][4994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Apr 30 03:27:11.377171 containerd[1469]: time="2025-04-30T03:27:11.373290785Z" level=info msg="TearDown network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\" successfully" Apr 30 03:27:11.377171 containerd[1469]: time="2025-04-30T03:27:11.373361918Z" level=info msg="StopPodSandbox for \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\" returns successfully" Apr 30 03:27:11.388819 sshd[4967]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:11.407307 systemd[1]: sshd@10-24.199.113.144:22-139.178.89.65:50234.service: Deactivated successfully. Apr 30 03:27:11.411760 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:27:11.419097 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:27:11.423047 systemd[1]: Started sshd@11-24.199.113.144:22-139.178.89.65:50242.service - OpenSSH per-connection server daemon (139.178.89.65:50242). 
Apr 30 03:27:11.428526 systemd-logind[1446]: Removed session 11. Apr 30 03:27:11.481318 containerd[1469]: time="2025-04-30T03:27:11.481207144Z" level=info msg="RemovePodSandbox for \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\"" Apr 30 03:27:11.486319 sshd[5011]: Accepted publickey for core from 139.178.89.65 port 50242 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:11.488055 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:11.488976 containerd[1469]: time="2025-04-30T03:27:11.488591262Z" level=info msg="Forcibly stopping sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\"" Apr 30 03:27:11.498159 systemd-logind[1446]: New session 12 of user core. Apr 30 03:27:11.503788 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.557 [WARNING][5025] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0", GenerateName:"calico-kube-controllers-566d49dfb-", Namespace:"calico-system", SelfLink:"", UID:"fc52b32e-625c-4cbe-89c1-725d343029fc", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566d49dfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"adea3e6fb6b52a306328528c6322b25f8890c1e39892bb177c13cca172e9556b", Pod:"calico-kube-controllers-566d49dfb-hxnld", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae5aea1ad1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.557 [INFO][5025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.557 [INFO][5025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" iface="eth0" netns="" Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.557 [INFO][5025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.557 [INFO][5025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.601 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" HandleID="k8s-pod-network.063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.602 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.604 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.614 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" HandleID="k8s-pod-network.063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.614 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" HandleID="k8s-pod-network.063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--kube--controllers--566d49dfb--hxnld-eth0" Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.619 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:11.626326 containerd[1469]: 2025-04-30 03:27:11.622 [INFO][5025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067" Apr 30 03:27:11.629419 containerd[1469]: time="2025-04-30T03:27:11.626373587Z" level=info msg="TearDown network for sandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\" successfully" Apr 30 03:27:11.671646 containerd[1469]: time="2025-04-30T03:27:11.671573986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:27:11.684639 containerd[1469]: time="2025-04-30T03:27:11.684564976Z" level=info msg="RemovePodSandbox \"063251ed60a0e3ebb0249dd83784cec57e02c354c576a5b0150e51a7a4ec6067\" returns successfully" Apr 30 03:27:11.685830 containerd[1469]: time="2025-04-30T03:27:11.685460282Z" level=info msg="StopPodSandbox for \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\"" Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.769 [WARNING][5056] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b718a743-a03f-4839-ab75-67d0237668cd", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3", Pod:"csi-node-driver-kld5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6545753b4dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.769 [INFO][5056] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.769 [INFO][5056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" iface="eth0" netns="" Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.769 [INFO][5056] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.770 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.810 [INFO][5063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" HandleID="k8s-pod-network.c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.810 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.810 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.832 [WARNING][5063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" HandleID="k8s-pod-network.c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.832 [INFO][5063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" HandleID="k8s-pod-network.c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.836 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:11.843654 containerd[1469]: 2025-04-30 03:27:11.840 [INFO][5056] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:27:11.844888 containerd[1469]: time="2025-04-30T03:27:11.843736294Z" level=info msg="TearDown network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\" successfully" Apr 30 03:27:11.844888 containerd[1469]: time="2025-04-30T03:27:11.843783935Z" level=info msg="StopPodSandbox for \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\" returns successfully" Apr 30 03:27:11.845167 containerd[1469]: time="2025-04-30T03:27:11.845089518Z" level=info msg="RemovePodSandbox for \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\"" Apr 30 03:27:11.845167 containerd[1469]: time="2025-04-30T03:27:11.845130326Z" level=info msg="Forcibly stopping sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\"" Apr 30 03:27:11.964808 sshd[5011]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:11.980650 systemd[1]: sshd@11-24.199.113.144:22-139.178.89.65:50242.service: Deactivated successfully. 
Apr 30 03:27:11.986570 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:27:11.988091 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:27:12.006019 systemd[1]: Started sshd@12-24.199.113.144:22-139.178.89.65:50250.service - OpenSSH per-connection server daemon (139.178.89.65:50250). Apr 30 03:27:12.012850 systemd-logind[1446]: Removed session 12. Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:11.934 [WARNING][5083] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b718a743-a03f-4839-ab75-67d0237668cd", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"b1263b258f850bccc1c2cd1483ab03ae6122de077eb8fd479be62c746ceb12c3", Pod:"csi-node-driver-kld5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6545753b4dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:11.935 [INFO][5083] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:11.935 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" iface="eth0" netns="" Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:11.935 [INFO][5083] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:11.935 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:12.014 [INFO][5091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" HandleID="k8s-pod-network.c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:12.014 [INFO][5091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:12.014 [INFO][5091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:12.052 [WARNING][5091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" HandleID="k8s-pod-network.c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:12.052 [INFO][5091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" HandleID="k8s-pod-network.c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-csi--node--driver--kld5k-eth0" Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:12.064 [INFO][5091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:12.081686 containerd[1469]: 2025-04-30 03:27:12.073 [INFO][5083] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c" Apr 30 03:27:12.085001 containerd[1469]: time="2025-04-30T03:27:12.081735553Z" level=info msg="TearDown network for sandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\" successfully" Apr 30 03:27:12.097377 containerd[1469]: time="2025-04-30T03:27:12.097184489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:27:12.097673 containerd[1469]: time="2025-04-30T03:27:12.097632344Z" level=info msg="RemovePodSandbox \"c9a794eb09b39963f2551bdd9c3666e602450e1f2d3d13b880522cc55d8e422c\" returns successfully" Apr 30 03:27:12.099190 sshd[5099]: Accepted publickey for core from 139.178.89.65 port 50250 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:12.102807 containerd[1469]: time="2025-04-30T03:27:12.101191541Z" level=info msg="StopPodSandbox for \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\"" Apr 30 03:27:12.104533 sshd[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:12.116973 systemd-logind[1446]: New session 13 of user core. Apr 30 03:27:12.123760 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.186 [WARNING][5115] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73a01c76-53ce-4689-98ef-5958ffbdd467", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e", Pod:"coredns-668d6bf9bc-c4b7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali096f8454293", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.187 [INFO][5115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.187 [INFO][5115] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" iface="eth0" netns="" Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.187 [INFO][5115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.187 [INFO][5115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.230 [INFO][5123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" HandleID="k8s-pod-network.ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0" Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.230 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.230 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.245 [WARNING][5123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" HandleID="k8s-pod-network.ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0" Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.245 [INFO][5123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" HandleID="k8s-pod-network.ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0" Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.252 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:12.269665 containerd[1469]: 2025-04-30 03:27:12.263 [INFO][5115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Apr 30 03:27:12.272529 containerd[1469]: time="2025-04-30T03:27:12.269721903Z" level=info msg="TearDown network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\" successfully" Apr 30 03:27:12.272529 containerd[1469]: time="2025-04-30T03:27:12.269784640Z" level=info msg="StopPodSandbox for \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\" returns successfully" Apr 30 03:27:12.272529 containerd[1469]: time="2025-04-30T03:27:12.271485529Z" level=info msg="RemovePodSandbox for \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\"" Apr 30 03:27:12.272529 containerd[1469]: time="2025-04-30T03:27:12.271524303Z" level=info msg="Forcibly stopping sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\"" Apr 30 03:27:12.364959 sshd[5099]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:12.372661 systemd[1]: sshd@12-24.199.113.144:22-139.178.89.65:50250.service: Deactivated successfully. 
Apr 30 03:27:12.378200 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:27:12.380218 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:27:12.382968 systemd-logind[1446]: Removed session 13. Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.347 [WARNING][5147] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73a01c76-53ce-4689-98ef-5958ffbdd467", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"9541ff616e5ccb1a56fdf13ade542c5b0e6f7f37304c0832e6260ab8cfc8300e", Pod:"coredns-668d6bf9bc-c4b7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali096f8454293", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.348 [INFO][5147] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.348 [INFO][5147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" iface="eth0" netns="" Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.348 [INFO][5147] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.348 [INFO][5147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.391 [INFO][5154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" HandleID="k8s-pod-network.ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0" Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.392 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.392 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.402 [WARNING][5154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" HandleID="k8s-pod-network.ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0" Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.402 [INFO][5154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" HandleID="k8s-pod-network.ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--c4b7t-eth0" Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.405 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:12.409882 containerd[1469]: 2025-04-30 03:27:12.407 [INFO][5147] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0" Apr 30 03:27:12.411208 containerd[1469]: time="2025-04-30T03:27:12.409944163Z" level=info msg="TearDown network for sandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\" successfully" Apr 30 03:27:12.414730 containerd[1469]: time="2025-04-30T03:27:12.414165590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:27:12.414730 containerd[1469]: time="2025-04-30T03:27:12.414259575Z" level=info msg="RemovePodSandbox \"ecf1f9c100fa94e3276275fafb46a7f716066ce93d047c5b262ca4cce0e3bff0\" returns successfully" Apr 30 03:27:12.415056 containerd[1469]: time="2025-04-30T03:27:12.414844433Z" level=info msg="StopPodSandbox for \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\"" Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.497 [WARNING][5174] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0", GenerateName:"calico-apiserver-ccf698587-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d8fe7d-2d8b-449c-8513-592e9baca4b5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccf698587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355", Pod:"calico-apiserver-ccf698587-gjwrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07700dbb7a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.498 [INFO][5174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.498 [INFO][5174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" iface="eth0" netns="" Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.498 [INFO][5174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.498 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.566 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" HandleID="k8s-pod-network.21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.566 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.566 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.581 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" HandleID="k8s-pod-network.21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.581 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" HandleID="k8s-pod-network.21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.586 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:12.593623 containerd[1469]: 2025-04-30 03:27:12.590 [INFO][5174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:27:12.595051 containerd[1469]: time="2025-04-30T03:27:12.593690732Z" level=info msg="TearDown network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\" successfully" Apr 30 03:27:12.595051 containerd[1469]: time="2025-04-30T03:27:12.593717321Z" level=info msg="StopPodSandbox for \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\" returns successfully" Apr 30 03:27:12.595051 containerd[1469]: time="2025-04-30T03:27:12.594315298Z" level=info msg="RemovePodSandbox for \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\"" Apr 30 03:27:12.595051 containerd[1469]: time="2025-04-30T03:27:12.594356248Z" level=info msg="Forcibly stopping sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\"" Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.671 [WARNING][5199] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0", GenerateName:"calico-apiserver-ccf698587-", Namespace:"calico-apiserver", SelfLink:"", UID:"49d8fe7d-2d8b-449c-8513-592e9baca4b5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccf698587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"19e02a367f528aeea1655cb7fd46db7eee44b041710967bf0d6e4d8785455355", Pod:"calico-apiserver-ccf698587-gjwrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07700dbb7a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.672 [INFO][5199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.672 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" iface="eth0" netns="" Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.672 [INFO][5199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.672 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.704 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" HandleID="k8s-pod-network.21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.705 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.705 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.714 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" HandleID="k8s-pod-network.21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.714 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" HandleID="k8s-pod-network.21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--gjwrf-eth0" Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.717 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:12.722526 containerd[1469]: 2025-04-30 03:27:12.719 [INFO][5199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b" Apr 30 03:27:12.723246 containerd[1469]: time="2025-04-30T03:27:12.722524078Z" level=info msg="TearDown network for sandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\" successfully" Apr 30 03:27:12.725408 containerd[1469]: time="2025-04-30T03:27:12.725342202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:27:12.725658 containerd[1469]: time="2025-04-30T03:27:12.725516382Z" level=info msg="RemovePodSandbox \"21359fc6a32b6f495ffcd41ba822e7ce6ee8bd1f3c1210dca737d909694b6a7b\" returns successfully" Apr 30 03:27:12.726538 containerd[1469]: time="2025-04-30T03:27:12.726076118Z" level=info msg="StopPodSandbox for \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\"" Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.784 [WARNING][5226] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"107c0e05-c671-4f02-9024-81afd867d87a", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885", Pod:"coredns-668d6bf9bc-fj257", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali485c638305d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.784 [INFO][5226] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.784 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" iface="eth0" netns="" Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.784 [INFO][5226] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.784 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.816 [INFO][5234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" HandleID="k8s-pod-network.023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.817 [INFO][5234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.817 [INFO][5234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.827 [WARNING][5234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" HandleID="k8s-pod-network.023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.827 [INFO][5234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" HandleID="k8s-pod-network.023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.830 [INFO][5234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:12.836629 containerd[1469]: 2025-04-30 03:27:12.833 [INFO][5226] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:27:12.837435 containerd[1469]: time="2025-04-30T03:27:12.836707868Z" level=info msg="TearDown network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\" successfully" Apr 30 03:27:12.837435 containerd[1469]: time="2025-04-30T03:27:12.837119475Z" level=info msg="StopPodSandbox for \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\" returns successfully" Apr 30 03:27:12.839148 containerd[1469]: time="2025-04-30T03:27:12.839095246Z" level=info msg="RemovePodSandbox for \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\"" Apr 30 03:27:12.839148 containerd[1469]: time="2025-04-30T03:27:12.839147875Z" level=info msg="Forcibly stopping sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\"" Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.905 [WARNING][5252] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"107c0e05-c671-4f02-9024-81afd867d87a", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"408832c8999b4544ddd35f415064f80c6e1b6ee2d7ae0a5024ce0909cc16f885", Pod:"coredns-668d6bf9bc-fj257", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali485c638305d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.905 [INFO][5252] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.906 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" iface="eth0" netns="" Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.906 [INFO][5252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.906 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.936 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" HandleID="k8s-pod-network.023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.936 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.936 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.944 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" HandleID="k8s-pod-network.023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.944 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" HandleID="k8s-pod-network.023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-coredns--668d6bf9bc--fj257-eth0" Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.947 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:12.953340 containerd[1469]: 2025-04-30 03:27:12.949 [INFO][5252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4" Apr 30 03:27:12.953340 containerd[1469]: time="2025-04-30T03:27:12.951868668Z" level=info msg="TearDown network for sandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\" successfully" Apr 30 03:27:12.955776 containerd[1469]: time="2025-04-30T03:27:12.955713030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:27:12.955956 containerd[1469]: time="2025-04-30T03:27:12.955797866Z" level=info msg="RemovePodSandbox \"023dc3692c8101fe93ec7547c285af6320ad077dd90fad2c29036848f20138e4\" returns successfully" Apr 30 03:27:12.956653 containerd[1469]: time="2025-04-30T03:27:12.956613614Z" level=info msg="StopPodSandbox for \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\"" Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.012 [WARNING][5277] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0", GenerateName:"calico-apiserver-ccf698587-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b15cb4e-c1af-493f-b9f9-4cb6e0146639", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccf698587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1", Pod:"calico-apiserver-ccf698587-5rqrm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid398d3a2f42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.012 [INFO][5277] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.012 [INFO][5277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" iface="eth0" netns="" Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.012 [INFO][5277] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.012 [INFO][5277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.048 [INFO][5284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" HandleID="k8s-pod-network.16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.048 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.048 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.056 [WARNING][5284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" HandleID="k8s-pod-network.16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.056 [INFO][5284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" HandleID="k8s-pod-network.16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.058 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:13.063406 containerd[1469]: 2025-04-30 03:27:13.061 [INFO][5277] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:27:13.063406 containerd[1469]: time="2025-04-30T03:27:13.063188887Z" level=info msg="TearDown network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\" successfully" Apr 30 03:27:13.063406 containerd[1469]: time="2025-04-30T03:27:13.063217818Z" level=info msg="StopPodSandbox for \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\" returns successfully" Apr 30 03:27:13.064039 containerd[1469]: time="2025-04-30T03:27:13.063854677Z" level=info msg="RemovePodSandbox for \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\"" Apr 30 03:27:13.064039 containerd[1469]: time="2025-04-30T03:27:13.063884383Z" level=info msg="Forcibly stopping sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\"" Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.126 [WARNING][5303] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0", GenerateName:"calico-apiserver-ccf698587-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b15cb4e-c1af-493f-b9f9-4cb6e0146639", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccf698587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-2-e7e0406ed5", ContainerID:"61a5b31479f92f34903874f82e9c59e53dbd7332ef5393e75e0cc2d77dd88fd1", Pod:"calico-apiserver-ccf698587-5rqrm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid398d3a2f42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.126 [INFO][5303] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.126 [INFO][5303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" iface="eth0" netns="" Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.126 [INFO][5303] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.127 [INFO][5303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.161 [INFO][5310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" HandleID="k8s-pod-network.16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.162 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.162 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.172 [WARNING][5310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" HandleID="k8s-pod-network.16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.172 [INFO][5310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" HandleID="k8s-pod-network.16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Workload="ci--4081.3.3--2--e7e0406ed5-k8s-calico--apiserver--ccf698587--5rqrm-eth0" Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.175 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:27:13.183129 containerd[1469]: 2025-04-30 03:27:13.179 [INFO][5303] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c" Apr 30 03:27:13.184691 containerd[1469]: time="2025-04-30T03:27:13.183180700Z" level=info msg="TearDown network for sandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\" successfully" Apr 30 03:27:13.192378 containerd[1469]: time="2025-04-30T03:27:13.192317435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:27:13.192847 containerd[1469]: time="2025-04-30T03:27:13.192403674Z" level=info msg="RemovePodSandbox \"16eb00c74093b5090346bf4157b0f441598807a375fcdb32e1d431977f3a389c\" returns successfully" Apr 30 03:27:17.281581 systemd[1]: run-containerd-runc-k8s.io-fe9c1d552f54d73f020e93158aa84d0ffaf6652340af6f255544d25f348d6110-runc.Rcy3TK.mount: Deactivated successfully. 
Apr 30 03:27:17.385976 systemd[1]: Started sshd@13-24.199.113.144:22-139.178.89.65:55648.service - OpenSSH per-connection server daemon (139.178.89.65:55648). Apr 30 03:27:17.434351 sshd[5342]: Accepted publickey for core from 139.178.89.65 port 55648 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:17.436637 sshd[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:17.443192 systemd-logind[1446]: New session 14 of user core. Apr 30 03:27:17.455838 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:27:17.633610 sshd[5342]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:17.640440 systemd[1]: sshd@13-24.199.113.144:22-139.178.89.65:55648.service: Deactivated successfully. Apr 30 03:27:17.647285 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:27:17.651509 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:27:17.655821 systemd-logind[1446]: Removed session 14. Apr 30 03:27:22.656027 systemd[1]: Started sshd@14-24.199.113.144:22-139.178.89.65:55658.service - OpenSSH per-connection server daemon (139.178.89.65:55658). Apr 30 03:27:22.721953 sshd[5379]: Accepted publickey for core from 139.178.89.65 port 55658 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:22.724593 sshd[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:22.733741 systemd-logind[1446]: New session 15 of user core. Apr 30 03:27:22.742774 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 30 03:27:22.751446 kubelet[2493]: E0430 03:27:22.751160 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:27:22.934822 sshd[5379]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:22.941134 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit.
Apr 30 03:27:22.942316 systemd[1]: sshd@14-24.199.113.144:22-139.178.89.65:55658.service: Deactivated successfully.
Apr 30 03:27:22.947345 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 03:27:22.950801 systemd-logind[1446]: Removed session 15.
Apr 30 03:27:27.741863 kubelet[2493]: E0430 03:27:27.741748 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:27:27.953924 systemd[1]: Started sshd@15-24.199.113.144:22-139.178.89.65:56662.service - OpenSSH per-connection server daemon (139.178.89.65:56662).
Apr 30 03:27:27.996389 sshd[5392]: Accepted publickey for core from 139.178.89.65 port 56662 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:27.998390 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:28.005687 systemd-logind[1446]: New session 16 of user core.
Apr 30 03:27:28.015764 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 03:27:28.230299 sshd[5392]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:28.234607 systemd[1]: sshd@15-24.199.113.144:22-139.178.89.65:56662.service: Deactivated successfully.
Apr 30 03:27:28.238370 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 03:27:28.242904 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit.
Apr 30 03:27:28.245133 systemd-logind[1446]: Removed session 16.
Apr 30 03:27:33.257190 systemd[1]: Started sshd@16-24.199.113.144:22-139.178.89.65:56666.service - OpenSSH per-connection server daemon (139.178.89.65:56666).
Apr 30 03:27:33.323675 sshd[5411]: Accepted publickey for core from 139.178.89.65 port 56666 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:33.325946 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:33.331710 systemd-logind[1446]: New session 17 of user core.
Apr 30 03:27:33.342799 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 03:27:33.549087 sshd[5411]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:33.554768 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit.
Apr 30 03:27:33.555375 systemd[1]: sshd@16-24.199.113.144:22-139.178.89.65:56666.service: Deactivated successfully.
Apr 30 03:27:33.559619 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 03:27:33.561637 systemd-logind[1446]: Removed session 17.
Apr 30 03:27:38.567911 systemd[1]: Started sshd@17-24.199.113.144:22-139.178.89.65:40040.service - OpenSSH per-connection server daemon (139.178.89.65:40040).
Apr 30 03:27:38.624014 sshd[5445]: Accepted publickey for core from 139.178.89.65 port 40040 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:38.626531 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:38.632781 systemd-logind[1446]: New session 18 of user core.
Apr 30 03:27:38.636800 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 03:27:38.841953 sshd[5445]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:38.854246 systemd[1]: sshd@17-24.199.113.144:22-139.178.89.65:40040.service: Deactivated successfully.
Apr 30 03:27:38.857196 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 03:27:38.859670 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit.
Apr 30 03:27:38.867108 systemd[1]: Started sshd@18-24.199.113.144:22-139.178.89.65:40044.service - OpenSSH per-connection server daemon (139.178.89.65:40044).
Apr 30 03:27:38.869610 systemd-logind[1446]: Removed session 18.
Apr 30 03:27:38.950857 sshd[5457]: Accepted publickey for core from 139.178.89.65 port 40044 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:38.953516 sshd[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:38.961759 systemd-logind[1446]: New session 19 of user core.
Apr 30 03:27:38.966813 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 03:27:39.354597 sshd[5457]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:39.374970 systemd[1]: Started sshd@19-24.199.113.144:22-139.178.89.65:40056.service - OpenSSH per-connection server daemon (139.178.89.65:40056).
Apr 30 03:27:39.377497 systemd[1]: sshd@18-24.199.113.144:22-139.178.89.65:40044.service: Deactivated successfully.
Apr 30 03:27:39.384146 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 03:27:39.386962 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Apr 30 03:27:39.389726 systemd-logind[1446]: Removed session 19.
Apr 30 03:27:39.444297 sshd[5466]: Accepted publickey for core from 139.178.89.65 port 40056 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:39.446943 sshd[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:39.455787 systemd-logind[1446]: New session 20 of user core.
Apr 30 03:27:39.461788 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 03:27:39.741574 kubelet[2493]: E0430 03:27:39.741390 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:27:40.646421 sshd[5466]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:40.666382 systemd[1]: sshd@19-24.199.113.144:22-139.178.89.65:40056.service: Deactivated successfully.
Apr 30 03:27:40.674969 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 03:27:40.681972 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Apr 30 03:27:40.696652 systemd[1]: Started sshd@20-24.199.113.144:22-139.178.89.65:40058.service - OpenSSH per-connection server daemon (139.178.89.65:40058).
Apr 30 03:27:40.704862 systemd-logind[1446]: Removed session 20.
Apr 30 03:27:40.777660 sshd[5483]: Accepted publickey for core from 139.178.89.65 port 40058 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:40.780379 sshd[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:40.789605 systemd-logind[1446]: New session 21 of user core.
Apr 30 03:27:40.797770 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 03:27:41.455096 sshd[5483]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:41.467610 systemd[1]: sshd@20-24.199.113.144:22-139.178.89.65:40058.service: Deactivated successfully.
Apr 30 03:27:41.473707 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 03:27:41.479248 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Apr 30 03:27:41.490919 systemd[1]: Started sshd@21-24.199.113.144:22-139.178.89.65:40074.service - OpenSSH per-connection server daemon (139.178.89.65:40074).
Apr 30 03:27:41.495901 systemd-logind[1446]: Removed session 21.
Apr 30 03:27:41.615084 sshd[5497]: Accepted publickey for core from 139.178.89.65 port 40074 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:41.617377 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:41.625183 systemd-logind[1446]: New session 22 of user core.
Apr 30 03:27:41.633724 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 03:27:41.821564 sshd[5497]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:41.827624 systemd[1]: sshd@21-24.199.113.144:22-139.178.89.65:40074.service: Deactivated successfully.
Apr 30 03:27:41.835331 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 03:27:41.837012 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Apr 30 03:27:41.839066 systemd-logind[1446]: Removed session 22.
Apr 30 03:27:46.844127 systemd[1]: Started sshd@22-24.199.113.144:22-139.178.89.65:37632.service - OpenSSH per-connection server daemon (139.178.89.65:37632).
Apr 30 03:27:46.916278 sshd[5512]: Accepted publickey for core from 139.178.89.65 port 37632 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:46.918810 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:46.924093 systemd-logind[1446]: New session 23 of user core.
Apr 30 03:27:46.930729 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:27:47.306774 sshd[5512]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:47.312022 systemd[1]: sshd@22-24.199.113.144:22-139.178.89.65:37632.service: Deactivated successfully.
Apr 30 03:27:47.315223 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:27:47.317254 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:27:47.318563 systemd-logind[1446]: Removed session 23.
Apr 30 03:27:49.742513 kubelet[2493]: E0430 03:27:49.742345 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:27:50.113246 systemd[1]: run-containerd-runc-k8s.io-a83c633f1e018664462a06216d91d8228db8fda72eb73366df3a2049211d07f0-runc.28EEul.mount: Deactivated successfully.
Apr 30 03:27:50.197780 kubelet[2493]: E0430 03:27:50.197729 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:27:52.325843 systemd[1]: Started sshd@23-24.199.113.144:22-139.178.89.65:37644.service - OpenSSH per-connection server daemon (139.178.89.65:37644).
Apr 30 03:27:52.402111 sshd[5548]: Accepted publickey for core from 139.178.89.65 port 37644 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:52.405458 sshd[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:52.423247 systemd-logind[1446]: New session 24 of user core.
Apr 30 03:27:52.426761 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 03:27:53.073511 sshd[5548]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:53.079709 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Apr 30 03:27:53.080121 systemd[1]: sshd@23-24.199.113.144:22-139.178.89.65:37644.service: Deactivated successfully.
Apr 30 03:27:53.085486 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 03:27:53.087015 systemd-logind[1446]: Removed session 24.
Apr 30 03:27:57.742091 kubelet[2493]: E0430 03:27:57.741961 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:27:58.095054 systemd[1]: Started sshd@24-24.199.113.144:22-139.178.89.65:34154.service - OpenSSH per-connection server daemon (139.178.89.65:34154).
Apr 30 03:27:58.154751 sshd[5562]: Accepted publickey for core from 139.178.89.65 port 34154 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:27:58.157010 sshd[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:27:58.164688 systemd-logind[1446]: New session 25 of user core.
Apr 30 03:27:58.171799 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 03:27:58.507296 sshd[5562]: pam_unix(sshd:session): session closed for user core
Apr 30 03:27:58.512862 systemd[1]: sshd@24-24.199.113.144:22-139.178.89.65:34154.service: Deactivated successfully.
Apr 30 03:27:58.515964 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 03:27:58.517491 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Apr 30 03:27:58.519007 systemd-logind[1446]: Removed session 25.
Apr 30 03:27:58.744052 kubelet[2493]: E0430 03:27:58.744005 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:28:03.530136 systemd[1]: Started sshd@25-24.199.113.144:22-139.178.89.65:34160.service - OpenSSH per-connection server daemon (139.178.89.65:34160).
Apr 30 03:28:03.598254 sshd[5575]: Accepted publickey for core from 139.178.89.65 port 34160 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:28:03.600084 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:03.606545 systemd-logind[1446]: New session 26 of user core.
Apr 30 03:28:03.614943 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 03:28:03.811229 sshd[5575]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:03.816189 systemd[1]: sshd@25-24.199.113.144:22-139.178.89.65:34160.service: Deactivated successfully.
Apr 30 03:28:03.820164 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 03:28:03.821595 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit.
Apr 30 03:28:03.823039 systemd-logind[1446]: Removed session 26.