Jan 30 13:55:10.938884 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:55:10.938914 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:55:10.938927 kernel: BIOS-provided physical RAM map: Jan 30 13:55:10.938934 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:55:10.938940 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:55:10.938946 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:55:10.938954 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jan 30 13:55:10.938961 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jan 30 13:55:10.938967 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 13:55:10.938977 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:55:10.938984 kernel: NX (Execute Disable) protection: active Jan 30 13:55:10.938991 kernel: APIC: Static calls initialized Jan 30 13:55:10.938998 kernel: SMBIOS 2.8 present. Jan 30 13:55:10.939005 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 30 13:55:10.939013 kernel: Hypervisor detected: KVM Jan 30 13:55:10.939024 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:55:10.939032 kernel: kvm-clock: using sched offset of 4502162415 cycles Jan 30 13:55:10.939040 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:55:10.939048 kernel: tsc: Detected 2494.138 MHz processor Jan 30 13:55:10.939056 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:55:10.939064 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:55:10.939072 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jan 30 13:55:10.939079 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 13:55:10.939087 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:55:10.939098 kernel: ACPI: Early table checksum verification disabled Jan 30 13:55:10.939105 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jan 30 13:55:10.939114 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.939128 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.939139 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.939150 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 30 13:55:10.939160 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.939173 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.939181 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.939192 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.939200 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 30 13:55:10.939207 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 30 13:55:10.939215 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 30 13:55:10.939223 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 30 13:55:10.939230 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 30 13:55:10.939238 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 30 13:55:10.939252 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 30 13:55:10.939260 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:55:10.939268 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:55:10.939277 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 13:55:10.939285 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 30 13:55:10.939293 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jan 30 13:55:10.939301 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jan 30 13:55:10.939312 kernel: Zone ranges: Jan 30 13:55:10.939320 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:55:10.939328 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jan 30 13:55:10.939336 kernel: Normal empty Jan 30 13:55:10.939344 kernel: Movable zone start for each node Jan 30 13:55:10.939352 kernel: Early memory node ranges Jan 30 13:55:10.939360 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 13:55:10.939368 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jan 30 13:55:10.939376 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jan 30 13:55:10.939387 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:55:10.939432 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 13:55:10.939441 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jan 30 13:55:10.939449 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:55:10.939458 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:55:10.939466 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:55:10.939474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:55:10.939482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:55:10.939490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:55:10.939502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:55:10.939510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:55:10.939518 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:55:10.939526 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:55:10.939535 kernel: TSC deadline timer available Jan 30 13:55:10.939543 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:55:10.939551 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:55:10.939559 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 30 13:55:10.939568 kernel: Booting paravirtualized kernel on KVM Jan 30 13:55:10.939579 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:55:10.939587 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:55:10.939595 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 30 13:55:10.939604 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:55:10.939612 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:55:10.939620 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 13:55:10.939629 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:55:10.939638 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:55:10.939648 kernel: random: crng init done Jan 30 13:55:10.939656 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:55:10.939664 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:55:10.939673 kernel: Fallback order for Node 0: 0 Jan 30 13:55:10.939681 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jan 30 13:55:10.939689 kernel: Policy zone: DMA32 Jan 30 13:55:10.939697 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:55:10.939705 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Jan 30 13:55:10.939713 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:55:10.939724 kernel: Kernel/User page tables isolation: enabled Jan 30 13:55:10.939732 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:55:10.939741 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:55:10.939749 kernel: Dynamic Preempt: voluntary Jan 30 13:55:10.939757 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:55:10.939766 kernel: rcu: RCU event tracing is enabled. Jan 30 13:55:10.939775 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:55:10.939783 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:55:10.939791 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:55:10.939799 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:55:10.939810 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:55:10.939818 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:55:10.939826 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:55:10.939834 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
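The log above records the full kernel command line twice (once as passed by the bootloader, once as parsed, with the unknown BOOT_IMAGE= token handed to user space). As a quick illustration of how those key=value arguments can be inspected on a running system, here is a minimal Python sketch that reads /proc/cmdline and splits it into a dictionary; the parameter names shown (root, verity.usrhash, flatcar.oem.id) are taken from the log, and the script itself is only an illustrative assumption, not part of the boot flow.

    #!/usr/bin/env python3
    """Minimal sketch: split /proc/cmdline into bare flags and key=value pairs."""

    def parse_cmdline(text: str):
        flags, params = [], {}
        for token in text.split():
            if "=" in token:
                key, _, value = token.partition("=")   # split at the first '=' only,
                params[key] = value                    # so root=LABEL=ROOT keeps its value intact
            else:
                flags.append(token)                    # tokens without '=' (none on this particular command line)
        return flags, params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            flags, params = parse_cmdline(f.read())
        print("root device:   ", params.get("root"))
        print("usr verity hash:", params.get("verity.usrhash"))
        print("OEM platform:  ", params.get("flatcar.oem.id"))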
Jan 30 13:55:10.939842 kernel: Console: colour VGA+ 80x25 Jan 30 13:55:10.939851 kernel: printk: console [tty0] enabled Jan 30 13:55:10.939859 kernel: printk: console [ttyS0] enabled Jan 30 13:55:10.939867 kernel: ACPI: Core revision 20230628 Jan 30 13:55:10.939876 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:55:10.939887 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:55:10.939895 kernel: x2apic enabled Jan 30 13:55:10.939903 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:55:10.939911 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:55:10.939919 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 30 13:55:10.939928 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jan 30 13:55:10.939936 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 13:55:10.939944 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 13:55:10.939967 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:55:10.939976 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:55:10.939984 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:55:10.939996 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:55:10.940004 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 30 13:55:10.940013 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:55:10.940022 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:55:10.940030 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 13:55:10.940039 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:55:10.940050 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:55:10.940059 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:55:10.940068 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:55:10.940077 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:55:10.940085 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 13:55:10.940094 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:55:10.940102 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:55:10.940119 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:55:10.940135 kernel: landlock: Up and running. Jan 30 13:55:10.940147 kernel: SELinux: Initializing. Jan 30 13:55:10.940160 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.940174 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.940186 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 30 13:55:10.940194 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:55:10.940203 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:55:10.940212 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:55:10.940224 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 30 13:55:10.940233 kernel: signal: max sigframe size: 1776 Jan 30 13:55:10.940241 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:55:10.940250 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:55:10.940259 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:55:10.940268 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:55:10.940277 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:55:10.940285 kernel: .... node #0, CPUs: #1 Jan 30 13:55:10.940294 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:55:10.940303 kernel: smpboot: Max logical packages: 1 Jan 30 13:55:10.940314 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jan 30 13:55:10.940323 kernel: devtmpfs: initialized Jan 30 13:55:10.940331 kernel: x86/mm: Memory block size: 128MB Jan 30 13:55:10.940340 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:55:10.940349 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.940358 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:55:10.940366 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:55:10.940375 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:55:10.940384 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:55:10.942432 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:55:10.942566 kernel: audit: type=2000 audit(1738245309.136:1): state=initialized audit_enabled=0 res=1 Jan 30 13:55:10.942579 kernel: cpuidle: using governor menu Jan 30 13:55:10.942588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:55:10.942597 kernel: dca service started, version 1.12.1 Jan 30 13:55:10.942606 kernel: PCI: Using configuration type 1 for base access Jan 30 13:55:10.942616 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
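The BogoMIPS figures above are internally consistent: the per-CPU value comes from loops_per_jiffy (lpj=2494138) and the SMP total is simply that value times the two CPUs that were brought up. A small arithmetic check, assuming CONFIG_HZ=1000 (an assumption, but the one consistent with the printed numbers):

    #!/usr/bin/env python3
    # Sanity-check the BogoMIPS figures in the log above.
    # Assumption: CONFIG_HZ=1000, which matches the printed values.

    HZ = 1000
    lpj = 2494138                    # "preset value.. 4988.27 BogoMIPS (lpj=2494138)"
    cpus = 2                         # "smp: Brought up 1 node, 2 CPUs"

    per_cpu = lpj * HZ / 500_000     # kernel formula: loops_per_jiffy / (500000 / HZ)
    total = per_cpu * cpus

    print(f"per-CPU BogoMIPS ~ {per_cpu:.2f}")  # ~4988.28; the kernel truncates and prints 4988.27
    print(f"total BogoMIPS   ~ {total:.2f}")    # ~9976.55, matching "Total of 2 processors activated"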
Jan 30 13:55:10.942625 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:55:10.942634 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:55:10.942652 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:55:10.942661 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:55:10.942669 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:55:10.942678 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:55:10.942687 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:55:10.942696 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:55:10.942705 kernel: ACPI: Interpreter enabled Jan 30 13:55:10.942714 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:55:10.942723 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:55:10.942735 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:55:10.942744 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:55:10.942753 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 13:55:10.942762 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:55:10.942975 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:55:10.943082 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:55:10.943175 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:55:10.943191 kernel: acpiphp: Slot [3] registered Jan 30 13:55:10.943200 kernel: acpiphp: Slot [4] registered Jan 30 13:55:10.943209 kernel: acpiphp: Slot [5] registered Jan 30 13:55:10.943219 kernel: acpiphp: Slot [6] registered Jan 30 13:55:10.943227 kernel: acpiphp: Slot [7] registered Jan 30 13:55:10.943236 kernel: acpiphp: Slot [8] registered Jan 30 13:55:10.943244 kernel: acpiphp: Slot [9] registered Jan 30 13:55:10.943253 kernel: acpiphp: Slot [10] registered Jan 30 13:55:10.943262 kernel: acpiphp: Slot [11] registered Jan 30 13:55:10.943274 kernel: acpiphp: Slot [12] registered Jan 30 13:55:10.943282 kernel: acpiphp: Slot [13] registered Jan 30 13:55:10.943291 kernel: acpiphp: Slot [14] registered Jan 30 13:55:10.943300 kernel: acpiphp: Slot [15] registered Jan 30 13:55:10.943309 kernel: acpiphp: Slot [16] registered Jan 30 13:55:10.943317 kernel: acpiphp: Slot [17] registered Jan 30 13:55:10.943326 kernel: acpiphp: Slot [18] registered Jan 30 13:55:10.943335 kernel: acpiphp: Slot [19] registered Jan 30 13:55:10.943344 kernel: acpiphp: Slot [20] registered Jan 30 13:55:10.943353 kernel: acpiphp: Slot [21] registered Jan 30 13:55:10.943364 kernel: acpiphp: Slot [22] registered Jan 30 13:55:10.943373 kernel: acpiphp: Slot [23] registered Jan 30 13:55:10.943381 kernel: acpiphp: Slot [24] registered Jan 30 13:55:10.943390 kernel: acpiphp: Slot [25] registered Jan 30 13:55:10.943414 kernel: acpiphp: Slot [26] registered Jan 30 13:55:10.943423 kernel: acpiphp: Slot [27] registered Jan 30 13:55:10.943431 kernel: acpiphp: Slot [28] registered Jan 30 13:55:10.943440 kernel: acpiphp: Slot [29] registered Jan 30 13:55:10.943449 kernel: acpiphp: Slot [30] registered Jan 30 13:55:10.943460 kernel: acpiphp: Slot [31] registered Jan 30 13:55:10.943469 kernel: PCI host bridge to bus 0000:00 Jan 30 13:55:10.943568 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:55:10.943654 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 30 13:55:10.943738 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:55:10.943820 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 30 13:55:10.943902 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 30 13:55:10.944019 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:55:10.944139 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 13:55:10.944250 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 13:55:10.944364 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 13:55:10.946499 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 30 13:55:10.946657 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 13:55:10.946758 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 13:55:10.946863 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 13:55:10.946960 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 13:55:10.947082 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 30 13:55:10.947186 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 30 13:55:10.947359 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 13:55:10.948566 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 13:55:10.948744 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 13:55:10.948863 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 13:55:10.948961 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 13:55:10.949055 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 30 13:55:10.949190 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 30 13:55:10.949310 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 30 13:55:10.951485 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:55:10.951663 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:55:10.951804 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 30 13:55:10.951903 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 30 13:55:10.952001 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 30 13:55:10.952102 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:55:10.952195 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 30 13:55:10.952296 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 30 13:55:10.952389 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 30 13:55:10.952546 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 30 13:55:10.952638 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 30 13:55:10.952731 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 30 13:55:10.952821 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 30 13:55:10.952919 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:55:10.953013 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 13:55:10.953111 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 30 13:55:10.953202 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 30 13:55:10.953303 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 30 13:55:10.954435 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 30 13:55:10.954608 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 30 13:55:10.954769 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 30 13:55:10.954906 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 13:55:10.955010 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 30 13:55:10.955104 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 30 13:55:10.955116 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:55:10.955126 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:55:10.955135 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:55:10.955146 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:55:10.955163 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 13:55:10.955180 kernel: iommu: Default domain type: Translated Jan 30 13:55:10.955192 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:55:10.955205 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:55:10.955219 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:55:10.955232 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:55:10.955247 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jan 30 13:55:10.955364 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 13:55:10.955499 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 13:55:10.955597 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:55:10.955609 kernel: vgaarb: loaded Jan 30 13:55:10.955618 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:55:10.955627 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:55:10.955636 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:55:10.955645 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:55:10.955655 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:55:10.955664 kernel: pnp: PnP ACPI init Jan 30 13:55:10.955673 kernel: pnp: PnP ACPI: found 4 devices Jan 30 13:55:10.955685 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:55:10.955694 kernel: NET: Registered PF_INET protocol family Jan 30 13:55:10.955703 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:55:10.955711 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 13:55:10.955720 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:55:10.955729 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:55:10.955738 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:55:10.955747 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 13:55:10.955756 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.955768 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.955777 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:55:10.955786 kernel: NET: Registered PF_XDP protocol family Jan 30 13:55:10.955877 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:55:10.955961 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 
13:55:10.956043 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:55:10.956125 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 30 13:55:10.956207 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 30 13:55:10.956308 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 13:55:10.958515 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:55:10.958547 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 13:55:10.958677 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 34851 usecs Jan 30 13:55:10.958691 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:55:10.958701 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:55:10.958710 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 30 13:55:10.958719 kernel: Initialise system trusted keyrings Jan 30 13:55:10.958737 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 13:55:10.958746 kernel: Key type asymmetric registered Jan 30 13:55:10.958755 kernel: Asymmetric key parser 'x509' registered Jan 30 13:55:10.958764 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:55:10.958773 kernel: io scheduler mq-deadline registered Jan 30 13:55:10.958782 kernel: io scheduler kyber registered Jan 30 13:55:10.958791 kernel: io scheduler bfq registered Jan 30 13:55:10.958799 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:55:10.958809 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 13:55:10.958821 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 13:55:10.958829 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 13:55:10.958838 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:55:10.958847 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:55:10.958856 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:55:10.958865 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:55:10.958874 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:55:10.959000 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 13:55:10.959014 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:55:10.959102 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 13:55:10.959187 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T13:55:10 UTC (1738245310) Jan 30 13:55:10.959271 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 13:55:10.959282 kernel: intel_pstate: CPU model not supported Jan 30 13:55:10.959291 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:55:10.959300 kernel: Segment Routing with IPv6 Jan 30 13:55:10.959309 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:55:10.959317 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:55:10.959330 kernel: Key type dns_resolver registered Jan 30 13:55:10.959339 kernel: IPI shorthand broadcast: enabled Jan 30 13:55:10.959347 kernel: sched_clock: Marking stable (983005342, 104355330)->(1117519699, -30159027) Jan 30 13:55:10.959356 kernel: registered taskstats version 1 Jan 30 13:55:10.959365 kernel: Loading compiled-in X.509 certificates Jan 30 13:55:10.959374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:55:10.959383 kernel: Key type .fscrypt registered 
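The rtc_cmos line above prints both the wall-clock time and the corresponding Unix epoch (1738245310). A one-line cross-check that the two agree:

    #!/usr/bin/env python3
    # Cross-check the rtc_cmos entry: epoch 1738245310 should be 2025-01-30T13:55:10 UTC.
    from datetime import datetime, timezone

    epoch = 1738245310
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-01-30T13:55:10+00:00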
Jan 30 13:55:10.959392 kernel: Key type fscrypt-provisioning registered Jan 30 13:55:10.960476 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:55:10.960491 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:55:10.960500 kernel: ima: No architecture policies found Jan 30 13:55:10.960509 kernel: clk: Disabling unused clocks Jan 30 13:55:10.960518 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:55:10.960528 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:55:10.960559 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:55:10.960571 kernel: Run /init as init process Jan 30 13:55:10.960580 kernel: with arguments: Jan 30 13:55:10.960589 kernel: /init Jan 30 13:55:10.960601 kernel: with environment: Jan 30 13:55:10.960610 kernel: HOME=/ Jan 30 13:55:10.960619 kernel: TERM=linux Jan 30 13:55:10.960628 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:55:10.960641 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:55:10.960653 systemd[1]: Detected virtualization kvm. Jan 30 13:55:10.960663 systemd[1]: Detected architecture x86-64. Jan 30 13:55:10.960672 systemd[1]: Running in initrd. Jan 30 13:55:10.960684 systemd[1]: No hostname configured, using default hostname. Jan 30 13:55:10.960693 systemd[1]: Hostname set to . Jan 30 13:55:10.960703 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:55:10.960712 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:55:10.960722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:55:10.960731 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:55:10.960742 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:55:10.960752 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:55:10.960764 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:55:10.960774 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:55:10.960785 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:55:10.960795 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:55:10.960804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:55:10.960814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:55:10.960826 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:55:10.960835 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:55:10.960845 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:55:10.960857 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:55:10.960867 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:55:10.960877 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
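The "Initializing machine ID from VM UUID" step above is systemd deriving the machine ID from the hypervisor-provided SMBIOS UUID. A hedged sketch of where that value is exposed on a KVM guest; the sysfs path is the usual source and is an assumption here, and reading it normally requires root:

    #!/usr/bin/env python3
    # Sketch: read the SMBIOS product UUID that systemd typically derives the
    # machine ID from on a KVM guest ("Initializing machine ID from VM UUID").
    # Assumption: /sys/class/dmi/id/product_uuid is the relevant source; root needed.

    from pathlib import Path

    uuid_path = Path("/sys/class/dmi/id/product_uuid")
    try:
        print("VM UUID:   ", uuid_path.read_text().strip())
    except PermissionError:
        print("need root to read", uuid_path)

    machine_id = Path("/etc/machine-id")
    if machine_id.exists():
        print("machine-id:", machine_id.read_text().strip())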
Jan 30 13:55:10.960890 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:55:10.960899 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:55:10.960909 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:55:10.960918 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:55:10.960928 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:55:10.960938 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:55:10.960948 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:55:10.960958 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:55:10.960971 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:55:10.960981 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:55:10.960990 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:55:10.961000 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:55:10.961010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:10.961019 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:55:10.961029 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:55:10.961039 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:55:10.961052 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:55:10.961092 systemd-journald[182]: Collecting audit messages is disabled. Jan 30 13:55:10.961118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:55:10.961129 systemd-journald[182]: Journal started Jan 30 13:55:10.961151 systemd-journald[182]: Runtime Journal (/run/log/journal/498f10e7bccb4fa6a3475374f5dcf4e9) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:55:10.944837 systemd-modules-load[183]: Inserted module 'overlay' Jan 30 13:55:10.965664 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:55:10.967324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:10.982979 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:55:10.987264 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:55:10.989443 kernel: Bridge firewalling registered Jan 30 13:55:10.989519 systemd-modules-load[183]: Inserted module 'br_netfilter' Jan 30 13:55:10.994763 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:55:10.996747 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:55:10.999167 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:55:11.008674 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:55:11.009674 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:55:11.025867 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:55:11.027242 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:55:11.033670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:55:11.034364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:55:11.038800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:55:11.065941 dracut-cmdline[218]: dracut-dracut-053 Jan 30 13:55:11.066691 systemd-resolved[217]: Positive Trust Anchors: Jan 30 13:55:11.066701 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:55:11.066737 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:55:11.069524 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 30 13:55:11.071729 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:55:11.075766 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:55:11.077672 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:55:11.171454 kernel: SCSI subsystem initialized Jan 30 13:55:11.182451 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:55:11.195445 kernel: iscsi: registered transport (tcp) Jan 30 13:55:11.218433 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:55:11.218544 kernel: QLogic iSCSI HBA Driver Jan 30 13:55:11.277825 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:55:11.284714 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:55:11.327661 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:55:11.327743 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:55:11.327758 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:55:11.372456 kernel: raid6: avx2x4 gen() 17052 MB/s Jan 30 13:55:11.389478 kernel: raid6: avx2x2 gen() 17329 MB/s Jan 30 13:55:11.406700 kernel: raid6: avx2x1 gen() 12754 MB/s Jan 30 13:55:11.406798 kernel: raid6: using algorithm avx2x2 gen() 17329 MB/s Jan 30 13:55:11.424575 kernel: raid6: .... xor() 15800 MB/s, rmw enabled Jan 30 13:55:11.424678 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:55:11.446443 kernel: xor: automatically using best checksumming function avx Jan 30 13:55:11.620457 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:55:11.636046 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:55:11.642850 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 30 13:55:11.666534 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 30 13:55:11.672061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:55:11.680656 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:55:11.698648 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 30 13:55:11.735623 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:55:11.741690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:55:11.803349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:55:11.811593 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:55:11.849989 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:55:11.854642 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:55:11.855726 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:55:11.856503 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:55:11.865673 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:55:11.888449 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:55:11.897108 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 30 13:55:11.979943 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 13:55:11.980149 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:55:11.980170 kernel: GPT:9289727 != 125829119 Jan 30 13:55:11.980198 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:55:11.980210 kernel: GPT:9289727 != 125829119 Jan 30 13:55:11.980221 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:55:11.980232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:55:11.980245 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:55:11.980391 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:55:11.990079 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 30 13:55:11.999203 kernel: ACPI: bus type USB registered Jan 30 13:55:11.999259 kernel: usbcore: registered new interface driver usbfs Jan 30 13:55:11.999281 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Jan 30 13:55:11.999508 kernel: libata version 3.00 loaded. Jan 30 13:55:11.999526 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 13:55:12.008506 kernel: usbcore: registered new interface driver hub Jan 30 13:55:12.008530 kernel: scsi host1: ata_piix Jan 30 13:55:12.008732 kernel: scsi host2: ata_piix Jan 30 13:55:12.008888 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 30 13:55:12.008910 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 30 13:55:12.010441 kernel: usbcore: registered new device driver usb Jan 30 13:55:12.025818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:55:12.025946 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:55:12.026771 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:55:12.027205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:12.027368 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 13:55:12.028205 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:12.036232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:12.054246 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:55:12.054314 kernel: AES CTR mode by8 optimization enabled Jan 30 13:55:12.055039 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (458) Jan 30 13:55:12.075944 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Jan 30 13:55:12.082680 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:55:12.096205 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:55:12.133426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:12.138219 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:55:12.138862 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:55:12.147373 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:55:12.163734 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:55:12.165845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:55:12.181429 disk-uuid[532]: Primary Header is updated. Jan 30 13:55:12.181429 disk-uuid[532]: Secondary Entries is updated. Jan 30 13:55:12.181429 disk-uuid[532]: Secondary Header is updated. Jan 30 13:55:12.195019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:55:12.207449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:55:12.209440 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:55:12.215487 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 30 13:55:12.224538 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 30 13:55:12.224834 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 30 13:55:12.225030 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 30 13:55:12.225258 kernel: hub 1-0:1.0: USB hub found Jan 30 13:55:12.225712 kernel: hub 1-0:1.0: 2 ports detected Jan 30 13:55:13.205445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:55:13.207271 disk-uuid[541]: The operation has completed successfully. Jan 30 13:55:13.271320 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:55:13.271520 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:55:13.277731 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:55:13.296653 sh[561]: Success Jan 30 13:55:13.313449 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:55:13.378724 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:55:13.380554 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:55:13.381315 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:55:13.405982 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:55:13.406078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:55:13.406092 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:55:13.407186 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:55:13.408769 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:55:13.416812 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:55:13.418581 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:55:13.434974 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:55:13.438875 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:55:13.454674 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:13.454758 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:55:13.455532 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:55:13.463540 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:55:13.477489 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:55:13.480525 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:13.488762 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:55:13.496825 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:55:13.640807 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:55:13.652614 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:55:13.660747 ignition[651]: Ignition 2.19.0 Jan 30 13:55:13.662439 ignition[651]: Stage: fetch-offline Jan 30 13:55:13.663027 ignition[651]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.663039 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.663148 ignition[651]: parsed url from cmdline: "" Jan 30 13:55:13.663151 ignition[651]: no config URL provided Jan 30 13:55:13.663157 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:55:13.663165 ignition[651]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:55:13.663171 ignition[651]: failed to fetch config: resource requires networking Jan 30 13:55:13.663430 ignition[651]: Ignition finished successfully Jan 30 13:55:13.667737 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:55:13.676912 systemd-networkd[750]: lo: Link UP Jan 30 13:55:13.676927 systemd-networkd[750]: lo: Gained carrier Jan 30 13:55:13.679366 systemd-networkd[750]: Enumeration completed Jan 30 13:55:13.679516 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:55:13.680000 systemd[1]: Reached target network.target - Network. Jan 30 13:55:13.680518 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:55:13.680523 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Jan 30 13:55:13.682663 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:55:13.682668 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:55:13.684033 systemd-networkd[750]: eth0: Link UP Jan 30 13:55:13.684038 systemd-networkd[750]: eth0: Gained carrier Jan 30 13:55:13.684049 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:55:13.687848 systemd-networkd[750]: eth1: Link UP Jan 30 13:55:13.687854 systemd-networkd[750]: eth1: Gained carrier Jan 30 13:55:13.687872 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:55:13.698864 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:55:13.705500 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.3/20 acquired from 169.254.169.253 Jan 30 13:55:13.709554 systemd-networkd[750]: eth0: DHCPv4 address 64.23.155.240/20, gateway 64.23.144.1 acquired from 169.254.169.253 Jan 30 13:55:13.723569 ignition[754]: Ignition 2.19.0 Jan 30 13:55:13.724256 ignition[754]: Stage: fetch Jan 30 13:55:13.724483 ignition[754]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.724494 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.724589 ignition[754]: parsed url from cmdline: "" Jan 30 13:55:13.724593 ignition[754]: no config URL provided Jan 30 13:55:13.724598 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:55:13.724606 ignition[754]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:55:13.724625 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 30 13:55:13.756425 ignition[754]: GET result: OK Jan 30 13:55:13.756599 ignition[754]: parsing config with SHA512: fcd7c04c7491e6473111eb6ef129fb74736e9e58d6885d9be7a1b5fec18e5ccfd018dbcd60dbffff2b972eacd6957ad68604a2775f64105640ce53f38b72643f Jan 30 13:55:13.763002 unknown[754]: fetched base config from "system" Jan 30 13:55:13.763019 unknown[754]: fetched base config from "system" Jan 30 13:55:13.763791 ignition[754]: fetch: fetch complete Jan 30 13:55:13.763031 unknown[754]: fetched user config from "digitalocean" Jan 30 13:55:13.763801 ignition[754]: fetch: fetch passed Jan 30 13:55:13.767041 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:55:13.763889 ignition[754]: Ignition finished successfully Jan 30 13:55:13.772693 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:55:13.807782 ignition[761]: Ignition 2.19.0 Jan 30 13:55:13.807795 ignition[761]: Stage: kargs Jan 30 13:55:13.808010 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.808023 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.809144 ignition[761]: kargs: kargs passed Jan 30 13:55:13.811852 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:55:13.809211 ignition[761]: Ignition finished successfully Jan 30 13:55:13.817729 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 30 13:55:13.844836 ignition[767]: Ignition 2.19.0 Jan 30 13:55:13.844862 ignition[767]: Stage: disks Jan 30 13:55:13.845190 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.845207 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.848817 ignition[767]: disks: disks passed Jan 30 13:55:13.848918 ignition[767]: Ignition finished successfully Jan 30 13:55:13.852607 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:55:13.854697 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:55:13.855185 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:55:13.855644 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:55:13.856560 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:55:13.857473 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:55:13.864725 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:55:13.895326 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:55:13.900248 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:55:13.904618 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:55:14.025440 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:55:14.026757 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:55:14.028707 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:55:14.039677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:55:14.043568 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:55:14.046490 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 30 13:55:14.057729 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:55:14.060433 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:55:14.070049 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (783) Jan 30 13:55:14.070092 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:14.070112 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:55:14.070130 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:55:14.061191 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:55:14.076100 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:55:14.081468 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:55:14.083682 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:55:14.089947 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:55:14.155999 coreos-metadata[785]: Jan 30 13:55:14.155 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:14.168328 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:55:14.172264 coreos-metadata[785]: Jan 30 13:55:14.171 INFO Fetch successful Jan 30 13:55:14.172983 coreos-metadata[786]: Jan 30 13:55:14.172 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:14.178246 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:55:14.177879 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 13:55:14.177989 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 13:55:14.184281 coreos-metadata[786]: Jan 30 13:55:14.184 INFO Fetch successful Jan 30 13:55:14.188390 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:55:14.191147 coreos-metadata[786]: Jan 30 13:55:14.191 INFO wrote hostname ci-4081.3.0-a-04505505d0 to /sysroot/etc/hostname Jan 30 13:55:14.194534 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:55:14.198172 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:55:14.323919 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:55:14.334797 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:55:14.336745 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:55:14.349495 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:14.381472 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:55:14.393985 ignition[904]: INFO : Ignition 2.19.0 Jan 30 13:55:14.393985 ignition[904]: INFO : Stage: mount Jan 30 13:55:14.395373 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:14.395373 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:14.397087 ignition[904]: INFO : mount: mount passed Jan 30 13:55:14.397087 ignition[904]: INFO : Ignition finished successfully Jan 30 13:55:14.396816 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:55:14.404613 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:55:14.406992 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:55:14.419672 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:55:14.430596 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917) Jan 30 13:55:14.434634 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:14.434705 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:55:14.434719 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:55:14.438423 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:55:14.440985 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
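The flatcar-metadata-hostname agent above fetches the droplet's metadata document and writes the hostname into the new root (the "wrote hostname ... to /sysroot/etc/hostname" line). A small Python sketch of that step; the endpoint and target path are from the log, while the "hostname" field name in the JSON is an assumption about DigitalOcean's metadata schema:

    import json
    import urllib.request

    METADATA_JSON = "http://169.254.169.254/metadata/v1.json"  # endpoint from the log
    HOSTNAME_FILE = "/sysroot/etc/hostname"                    # target from the log

    def write_hostname() -> str:
        with urllib.request.urlopen(METADATA_JSON, timeout=10) as resp:
            meta = json.load(resp)
        hostname = meta["hostname"]        # assumed field name in the metadata document
        with open(HOSTNAME_FILE, "w") as fh:
            fh.write(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        print("wrote hostname", write_hostname())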
Jan 30 13:55:14.477362 ignition[933]: INFO : Ignition 2.19.0 Jan 30 13:55:14.477362 ignition[933]: INFO : Stage: files Jan 30 13:55:14.478861 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:14.478861 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:14.478861 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:55:14.481658 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:55:14.481658 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:55:14.486581 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:55:14.487430 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:55:14.487430 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:55:14.487123 unknown[933]: wrote ssh authorized keys file for user: core Jan 30 13:55:14.494992 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:55:14.495960 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:55:14.536183 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:55:14.611440 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:55:14.611440 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:55:14.613729 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:55:15.109443 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:55:15.443590 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:55:15.443590 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:55:15.445467 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:55:15.445467 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:55:15.445467 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:55:15.445467 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:55:15.445467 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:55:15.445467 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:55:15.449972 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:55:15.449972 ignition[933]: INFO : files: files passed Jan 30 13:55:15.449972 ignition[933]: INFO : Ignition finished successfully Jan 30 13:55:15.448362 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:55:15.450739 systemd-networkd[750]: eth1: Gained IPv6LL Jan 30 13:55:15.454725 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:55:15.458997 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:55:15.461537 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:55:15.461654 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:55:15.478211 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:55:15.478211 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:55:15.480391 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:55:15.483419 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:55:15.484062 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:55:15.490758 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:55:15.529636 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:55:15.529784 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 30 13:55:15.531467 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:55:15.532061 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:55:15.532965 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:55:15.549731 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:55:15.565131 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:55:15.572612 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:55:15.578703 systemd-networkd[750]: eth0: Gained IPv6LL Jan 30 13:55:15.584440 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:55:15.585584 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:55:15.586057 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:55:15.586518 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:55:15.586666 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:55:15.588203 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:55:15.588859 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:55:15.589727 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:55:15.590626 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:55:15.591335 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:55:15.592016 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:55:15.592685 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:55:15.593628 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:55:15.594324 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:55:15.595072 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:55:15.595996 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:55:15.596134 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:55:15.597215 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:55:15.598082 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:55:15.598815 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:55:15.598965 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:55:15.599677 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:55:15.599808 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:55:15.600926 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:55:15.601089 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:55:15.601983 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:55:15.602114 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:55:15.602934 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:55:15.603067 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:55:15.613190 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 30 13:55:15.613681 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:55:15.613873 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:55:15.616651 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:55:15.619013 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:55:15.619293 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:55:15.620574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:55:15.620742 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:55:15.627238 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:55:15.627348 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:55:15.636483 ignition[987]: INFO : Ignition 2.19.0 Jan 30 13:55:15.637328 ignition[987]: INFO : Stage: umount Jan 30 13:55:15.638494 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:15.638494 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:15.641179 ignition[987]: INFO : umount: umount passed Jan 30 13:55:15.641179 ignition[987]: INFO : Ignition finished successfully Jan 30 13:55:15.642055 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:55:15.642186 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:55:15.643067 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:55:15.643116 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:55:15.647168 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:55:15.647235 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:55:15.648047 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:55:15.648117 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:55:15.649769 systemd[1]: Stopped target network.target - Network. Jan 30 13:55:15.650077 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:55:15.650167 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:55:15.651430 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:55:15.654876 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:55:15.654940 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:55:15.655385 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:55:15.655723 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:55:15.656086 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:55:15.656135 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:55:15.656537 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:55:15.656572 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:55:15.657200 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:55:15.657253 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:55:15.657878 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:55:15.657919 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:55:15.659036 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 30 13:55:15.659657 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:55:15.661729 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:55:15.662350 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:55:15.662628 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:55:15.663520 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 30 13:55:15.663872 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:55:15.664010 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:55:15.667474 systemd-networkd[750]: eth1: DHCPv6 lease lost Jan 30 13:55:15.668583 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:55:15.668738 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:55:15.672075 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:55:15.672216 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:55:15.673819 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:55:15.673894 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:55:15.679557 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:55:15.680023 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:55:15.680098 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:55:15.682292 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:55:15.682372 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:55:15.682905 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:55:15.682969 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:55:15.683673 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:55:15.683715 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:55:15.684555 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:55:15.692920 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:55:15.693889 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:55:15.696067 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:55:15.696168 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:55:15.697093 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:55:15.697131 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:55:15.698156 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:55:15.698215 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:55:15.699629 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:55:15.699694 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:55:15.700556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:55:15.700615 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:55:15.708757 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:55:15.709385 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 30 13:55:15.709499 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:55:15.710016 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:15.710068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:15.714060 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:55:15.714196 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:55:15.718770 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:55:15.718953 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:55:15.720181 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:55:15.733708 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:55:15.743444 systemd[1]: Switching root. Jan 30 13:55:15.792330 systemd-journald[182]: Journal stopped Jan 30 13:55:16.940263 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Jan 30 13:55:16.940358 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:55:16.940380 kernel: SELinux: policy capability open_perms=1 Jan 30 13:55:16.940394 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:55:16.940452 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:55:16.940471 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:55:16.940483 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:55:16.940507 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:55:16.940519 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:55:16.940531 kernel: audit: type=1403 audit(1738245315.982:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:55:16.940545 systemd[1]: Successfully loaded SELinux policy in 38.139ms. Jan 30 13:55:16.940572 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.767ms. Jan 30 13:55:16.940594 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:55:16.940607 systemd[1]: Detected virtualization kvm. Jan 30 13:55:16.940619 systemd[1]: Detected architecture x86-64. Jan 30 13:55:16.940637 systemd[1]: Detected first boot. Jan 30 13:55:16.940664 systemd[1]: Hostname set to . Jan 30 13:55:16.940682 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:55:16.940701 zram_generator::config[1030]: No configuration found. Jan 30 13:55:16.940724 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:55:16.940741 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:55:16.940761 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:55:16.940774 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:55:16.940797 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:55:16.940814 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:55:16.940832 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:55:16.940848 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
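After the switch into the real root, systemd reports a first boot and derives the machine ID from the VM's UUID. A hedged sketch of one way that value can be obtained on a KVM guest; the /sys/class/dmi/id/product_uuid source is an assumption, not something the log states:

    import pathlib
    import uuid

    DMI_UUID = pathlib.Path("/sys/class/dmi/id/product_uuid")  # assumed source on KVM guests

    def machine_id_from_vm_uuid() -> str:
        """Return the hypervisor-provided UUID as 32 lowercase hex chars (machine-id form)."""
        raw = DMI_UUID.read_text().strip()
        return uuid.UUID(raw).hex

    if __name__ == "__main__":
        print(machine_id_from_vm_uuid())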
Jan 30 13:55:16.940870 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:55:16.940888 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:55:16.940902 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:55:16.940921 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:55:16.940941 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:55:16.940959 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:55:16.940976 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:55:16.940992 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:55:16.941011 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:55:16.941038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:55:16.941052 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:55:16.941068 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:55:16.941088 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:55:16.941107 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:55:16.941127 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:55:16.941150 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:55:16.941168 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:55:16.941186 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:55:16.941202 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:55:16.941217 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:55:16.941230 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:55:16.941242 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:55:16.941256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:55:16.941276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:55:16.941300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:55:16.941318 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:55:16.941337 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:55:16.941354 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:55:16.941371 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:55:16.941388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:16.941487 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:55:16.941510 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:55:16.941530 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 30 13:55:16.941560 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:55:16.941584 systemd[1]: Reached target machines.target - Containers. Jan 30 13:55:16.941605 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:55:16.941627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:16.941649 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:55:16.941670 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:55:16.941691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:16.941713 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:55:16.941740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:16.941764 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:55:16.941785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:16.941808 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:55:16.941828 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:55:16.941850 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:55:16.941870 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:55:16.941890 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:55:16.941908 kernel: ACPI: bus type drm_connector registered Jan 30 13:55:16.941930 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:55:16.941948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:55:16.941965 kernel: loop: module loaded Jan 30 13:55:16.941979 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:55:16.941992 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:55:16.942009 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:55:16.942022 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:55:16.942039 systemd[1]: Stopped verity-setup.service. Jan 30 13:55:16.942052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:16.942069 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:55:16.942081 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:55:16.942094 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:55:16.942107 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:55:16.942120 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:55:16.942136 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:55:16.942149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:55:16.942161 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 30 13:55:16.942175 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:55:16.942192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:55:16.942221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:55:16.942241 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:55:16.942262 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:55:16.942283 kernel: fuse: init (API version 7.39) Jan 30 13:55:16.942302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:55:16.942325 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:16.942393 systemd-journald[1106]: Collecting audit messages is disabled. Jan 30 13:55:16.942481 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:55:16.942511 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:55:16.942533 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:16.942554 systemd-journald[1106]: Journal started Jan 30 13:55:16.942591 systemd-journald[1106]: Runtime Journal (/run/log/journal/498f10e7bccb4fa6a3475374f5dcf4e9) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:55:16.606014 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:55:16.623501 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:55:16.623898 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:55:16.946495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:55:16.946583 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:55:16.949360 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:55:16.951919 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:55:16.953093 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:55:16.961100 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:55:16.970980 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:55:16.978554 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:55:16.986581 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:55:16.987206 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:55:16.987247 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:55:16.991419 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:55:17.000073 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:55:17.005038 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:55:17.006828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:17.010730 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:55:17.019704 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 30 13:55:17.021614 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:55:17.028681 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:55:17.030665 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:55:17.037640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:55:17.041653 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:55:17.047654 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:55:17.053829 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:55:17.054570 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:55:17.055472 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:55:17.090254 systemd-journald[1106]: Time spent on flushing to /var/log/journal/498f10e7bccb4fa6a3475374f5dcf4e9 is 87.716ms for 986 entries. Jan 30 13:55:17.090254 systemd-journald[1106]: System Journal (/var/log/journal/498f10e7bccb4fa6a3475374f5dcf4e9) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:55:17.204673 systemd-journald[1106]: Received client request to flush runtime journal. Jan 30 13:55:17.204734 kernel: loop0: detected capacity change from 0 to 8 Jan 30 13:55:17.204755 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:55:17.092241 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:55:17.094963 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:55:17.099283 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:55:17.113943 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:55:17.118704 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:55:17.147856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:55:17.206945 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:55:17.207422 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 13:55:17.214143 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:55:17.216049 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:55:17.218624 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:55:17.255871 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:55:17.262616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:55:17.263818 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 13:55:17.316023 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jan 30 13:55:17.316046 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jan 30 13:55:17.319423 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 13:55:17.324136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 13:55:17.369894 kernel: loop4: detected capacity change from 0 to 8 Jan 30 13:55:17.379103 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 13:55:17.416456 kernel: loop6: detected capacity change from 0 to 210664 Jan 30 13:55:17.428426 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 13:55:17.443584 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 13:55:17.446032 (sd-merge)[1175]: Merged extensions into '/usr'. Jan 30 13:55:17.461678 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:55:17.461709 systemd[1]: Reloading... Jan 30 13:55:17.572545 zram_generator::config[1197]: No configuration found. Jan 30 13:55:17.753518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:17.804525 systemd[1]: Reloading finished in 341 ms. Jan 30 13:55:17.846781 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:55:17.868248 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:55:17.868652 systemd[1]: Starting ensure-sysext.service... Jan 30 13:55:17.881648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:55:17.882792 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:55:17.901575 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:55:17.901595 systemd[1]: Reloading... Jan 30 13:55:17.910282 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:55:17.910838 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:55:17.911816 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:55:17.912077 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 30 13:55:17.912139 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 30 13:55:17.915886 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:55:17.915904 systemd-tmpfiles[1244]: Skipping /boot Jan 30 13:55:17.929709 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:55:17.929724 systemd-tmpfiles[1244]: Skipping /boot Jan 30 13:55:18.035453 zram_generator::config[1272]: No configuration found. Jan 30 13:55:18.171183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:18.223235 systemd[1]: Reloading finished in 321 ms. Jan 30 13:55:18.236697 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:55:18.243122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:55:18.259821 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:55:18.276940 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
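The (sd-merge) lines above show systemd-sysext finding the staged extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-digitalocean) and merging them over /usr, after which systemd reloads. A small Python sketch that only enumerates candidate extensions the way sd-merge would; the search directories are systemd-sysext's documented defaults rather than paths taken from this log:

    import pathlib

    # Directories systemd-sysext scans for *.raw images or extension directories.
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extensions() -> list[str]:
        names = []
        for base in SEARCH_PATHS:
            root = pathlib.Path(base)
            if not root.is_dir():
                continue
            for entry in sorted(root.iterdir()):
                if entry.suffix == ".raw" or entry.is_dir():
                    names.append(entry.name.removesuffix(".raw"))
        return names

    if __name__ == "__main__":
        print("Using extensions:", ", ".join(repr(n) for n in list_extensions()))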
Jan 30 13:55:18.281501 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:55:18.292623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:55:18.299706 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:55:18.306689 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:55:18.319080 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:55:18.323778 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.324135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:18.331958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:18.335862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:18.345338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:18.346882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:18.347117 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.352017 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.353577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:18.353896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:18.354042 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.360702 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.361113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:18.369212 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:55:18.371360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:18.371633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.374252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:55:18.376672 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:18.378070 systemd[1]: Finished ensure-sysext.service. Jan 30 13:55:18.390980 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:55:18.397328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:55:18.410769 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 30 13:55:18.411470 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:55:18.412133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:55:18.413332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:55:18.415804 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:18.416083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:55:18.431894 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:55:18.441208 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:55:18.454649 augenrules[1350]: No rules Jan 30 13:55:18.461045 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:55:18.468802 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Jan 30 13:55:18.471365 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:55:18.472634 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:55:18.480295 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:55:18.485950 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:55:18.507849 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:55:18.518192 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:55:18.526815 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:55:18.527597 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:55:18.696512 systemd-resolved[1321]: Positive Trust Anchors: Jan 30 13:55:18.696530 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:55:18.696568 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:55:18.722602 systemd-resolved[1321]: Using system hostname 'ci-4081.3.0-a-04505505d0'. Jan 30 13:55:18.733112 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:55:18.735761 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:55:18.754622 systemd-networkd[1365]: lo: Link UP Jan 30 13:55:18.754633 systemd-networkd[1365]: lo: Gained carrier Jan 30 13:55:18.774688 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:55:18.775631 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:55:18.776848 systemd-networkd[1365]: Enumeration completed Jan 30 13:55:18.776967 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:55:18.778584 systemd[1]: Reached target network.target - Network. 
Jan 30 13:55:18.779256 systemd-networkd[1365]: eth0: Configuring with /run/systemd/network/10-da:5b:40:cf:63:73.network. Jan 30 13:55:18.784859 systemd-networkd[1365]: eth1: Configuring with /run/systemd/network/10-62:d2:d9:7b:d5:03.network. Jan 30 13:55:18.789921 systemd-networkd[1365]: eth0: Link UP Jan 30 13:55:18.789935 systemd-networkd[1365]: eth0: Gained carrier Jan 30 13:55:18.790624 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:55:18.798842 systemd-networkd[1365]: eth1: Link UP Jan 30 13:55:18.798853 systemd-networkd[1365]: eth1: Gained carrier Jan 30 13:55:18.804676 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Jan 30 13:55:18.830719 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 13:55:18.832529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.832723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:18.840919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:18.851782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:18.862875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:18.864234 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:18.864314 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:55:18.864340 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.864933 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:55:18.883757 systemd-timesyncd[1343]: Contacted time server 23.168.136.132:123 (0.flatcar.pool.ntp.org). Jan 30 13:55:18.883882 systemd-timesyncd[1343]: Initial clock synchronization to Thu 2025-01-30 13:55:18.497922 UTC. Jan 30 13:55:18.893500 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 13:55:18.895914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:55:18.896116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:55:18.901470 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1377) Jan 30 13:55:18.901565 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:55:18.902277 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 13:55:18.903530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:55:18.903839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:18.907545 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:55:18.913706 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:18.913954 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
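The eth0 and eth1 entries above are matched by runtime units named 10-<mac>.network, generated under /run/systemd/network so each NIC is selected by its MAC address. A hedged Python sketch of producing such a unit; the [Match]/[Network] layout is the standard systemd.network format, but the exact contents DigitalOcean's generator writes are not shown in the log:

    import pathlib
    import textwrap

    RUNTIME_DIR = pathlib.Path("/run/systemd/network")

    def write_network_unit(mac: str, dhcp: str = "ipv4") -> pathlib.Path:
        """Write a minimal MAC-matched unit like 10-<mac>.network."""
        unit = RUNTIME_DIR / f"10-{mac}.network"
        unit.parent.mkdir(parents=True, exist_ok=True)
        unit.write_text(textwrap.dedent(f"""\
            [Match]
            MACAddress={mac}

            [Network]
            DHCP={dhcp}
        """))
        return unit

    if __name__ == "__main__":
        print(write_network_unit("da:5b:40:cf:63:73"))  # eth0's MAC from the log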
Jan 30 13:55:18.915603 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:55:18.918063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:55:18.957599 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 13:55:18.983653 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:55:19.051437 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:55:19.061432 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 13:55:19.063555 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 13:55:19.066710 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:55:19.067144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:19.072443 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 13:55:19.072579 kernel: [drm] features: -context_init Jan 30 13:55:19.074438 kernel: [drm] number of scanouts: 1 Jan 30 13:55:19.074523 kernel: [drm] number of cap sets: 0 Jan 30 13:55:19.076437 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 13:55:19.081463 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 13:55:19.081641 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:55:19.091451 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 13:55:19.107592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:19.109520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:19.115871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:19.125464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:55:19.146017 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:55:19.167564 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:19.167965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:19.194127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:19.195472 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:55:19.316463 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:55:19.345267 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:55:19.347235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:19.356976 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:55:19.379433 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:55:19.414870 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:55:19.417032 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:55:19.417266 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:55:19.417722 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:55:19.417888 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jan 30 13:55:19.418345 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:55:19.418695 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:55:19.418826 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:55:19.418915 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:55:19.418958 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:55:19.419043 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:55:19.420987 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:55:19.425630 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:55:19.434795 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:55:19.445745 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:55:19.449653 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:55:19.450413 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:55:19.452053 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:55:19.452699 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:55:19.452735 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:55:19.455672 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:55:19.458667 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:55:19.471749 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:55:19.484827 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:55:19.490232 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:55:19.503849 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:55:19.513431 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:55:19.516747 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:55:19.521616 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:55:19.526850 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:55:19.528908 jq[1434]: false Jan 30 13:55:19.542873 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:55:19.546666 dbus-daemon[1433]: [system] SELinux support is enabled Jan 30 13:55:19.561779 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:55:19.564263 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:55:19.564978 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:55:19.568746 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:55:19.584761 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 30 13:55:19.597528 coreos-metadata[1432]: Jan 30 13:55:19.596 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:19.601964 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:55:19.607389 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:55:19.622619 coreos-metadata[1432]: Jan 30 13:55:19.622 INFO Fetch successful Jan 30 13:55:19.633659 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:55:19.633891 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:55:19.642079 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:55:19.642145 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:55:19.646591 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:55:19.646766 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 13:55:19.646794 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:55:19.655101 jq[1444]: true Jan 30 13:55:19.662077 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:55:19.662414 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:55:19.675098 extend-filesystems[1435]: Found loop4 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found loop5 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found loop6 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found loop7 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found vda Jan 30 13:55:19.675098 extend-filesystems[1435]: Found vda1 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found vda2 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found vda3 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found usr Jan 30 13:55:19.675098 extend-filesystems[1435]: Found vda4 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found vda6 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found vda7 Jan 30 13:55:19.675098 extend-filesystems[1435]: Found vda9 Jan 30 13:55:19.675098 extend-filesystems[1435]: Checking size of /dev/vda9 Jan 30 13:55:19.823683 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 13:55:19.823778 update_engine[1443]: I20250130 13:55:19.706298 1443 main.cc:92] Flatcar Update Engine starting Jan 30 13:55:19.823778 update_engine[1443]: I20250130 13:55:19.728144 1443 update_check_scheduler.cc:74] Next update check in 4m53s Jan 30 13:55:19.722996 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:55:19.859711 extend-filesystems[1435]: Resized partition /dev/vda9 Jan 30 13:55:19.737777 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:55:19.874283 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:55:19.884114 tar[1451]: linux-amd64/helm Jan 30 13:55:19.744824 systemd-logind[1442]: New seat seat0. 
Jan 30 13:55:19.755703 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:55:19.905665 jq[1461]: true Jan 30 13:55:19.765573 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:55:19.765964 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:55:19.828632 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:55:19.828660 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:55:19.851799 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:55:19.873827 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:55:19.888165 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:55:19.995053 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 13:55:20.009481 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1370) Jan 30 13:55:20.026436 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:55:20.026436 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 13:55:20.026436 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 13:55:20.052115 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Jan 30 13:55:20.052115 extend-filesystems[1435]: Found vdb Jan 30 13:55:20.031921 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:55:20.076688 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:55:20.033557 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:55:20.058614 systemd-networkd[1365]: eth1: Gained IPv6LL Jan 30 13:55:20.066427 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:55:20.081963 systemd[1]: Starting sshkeys.service... Jan 30 13:55:20.100110 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:55:20.119604 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:55:20.131782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:20.142860 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:55:20.189774 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:55:20.203108 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:55:20.214839 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:55:20.320908 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 30 13:55:20.326795 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:55:20.340195 coreos-metadata[1512]: Jan 30 13:55:20.339 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:20.358307 coreos-metadata[1512]: Jan 30 13:55:20.355 INFO Fetch successful Jan 30 13:55:20.378582 systemd-networkd[1365]: eth0: Gained IPv6LL Jan 30 13:55:20.389803 unknown[1512]: wrote ssh authorized keys file for user: core Jan 30 13:55:20.452696 containerd[1464]: time="2025-01-30T13:55:20.452526377Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:55:20.473432 update-ssh-keys[1524]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:55:20.466924 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:55:20.476132 systemd[1]: Finished sshkeys.service. Jan 30 13:55:20.550089 containerd[1464]: time="2025-01-30T13:55:20.549882749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.557667 containerd[1464]: time="2025-01-30T13:55:20.557081506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.557667 containerd[1464]: time="2025-01-30T13:55:20.557140960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:55:20.557667 containerd[1464]: time="2025-01-30T13:55:20.557167072Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:55:20.557667 containerd[1464]: time="2025-01-30T13:55:20.557390683Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:55:20.557667 containerd[1464]: time="2025-01-30T13:55:20.557433280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.557667 containerd[1464]: time="2025-01-30T13:55:20.557505519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.557667 containerd[1464]: time="2025-01-30T13:55:20.557517514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.557960 containerd[1464]: time="2025-01-30T13:55:20.557733611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.557960 containerd[1464]: time="2025-01-30T13:55:20.557748697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.557960 containerd[1464]: time="2025-01-30T13:55:20.557764467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.557960 containerd[1464]: time="2025-01-30T13:55:20.557773771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.557960 containerd[1464]: time="2025-01-30T13:55:20.557870883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.558129 containerd[1464]: time="2025-01-30T13:55:20.558095626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.558269 containerd[1464]: time="2025-01-30T13:55:20.558216448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.558269 containerd[1464]: time="2025-01-30T13:55:20.558239664Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:55:20.558524 containerd[1464]: time="2025-01-30T13:55:20.558376460Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:55:20.563252 containerd[1464]: time="2025-01-30T13:55:20.562966676Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:55:20.564768 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.574543492Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.574620571Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.574638498Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.574653266Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.574670106Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.574873525Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.575149448Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.575269076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.575290817Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.575311533Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 13:55:20.575307 containerd[1464]: time="2025-01-30T13:55:20.575331517Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575376134Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575416939Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575438826Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575459745Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575477326Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575492685Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575507693Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575534589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575554128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575611456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575636949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575658951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575677794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.577293 containerd[1464]: time="2025-01-30T13:55:20.575694443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575712362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575730082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575775986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575800260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575818429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575834537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575882140Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575929259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575948685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.575964200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.576022506Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.576050328Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:55:20.578064 containerd[1464]: time="2025-01-30T13:55:20.576065342Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:55:20.578589 containerd[1464]: time="2025-01-30T13:55:20.576082871Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:55:20.578589 containerd[1464]: time="2025-01-30T13:55:20.576097392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.578589 containerd[1464]: time="2025-01-30T13:55:20.576124241Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:55:20.578589 containerd[1464]: time="2025-01-30T13:55:20.576146098Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:55:20.578589 containerd[1464]: time="2025-01-30T13:55:20.576166358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:55:20.584121 containerd[1464]: time="2025-01-30T13:55:20.582745385Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:55:20.584121 containerd[1464]: time="2025-01-30T13:55:20.582834873Z" level=info msg="Connect containerd service" Jan 30 13:55:20.584121 containerd[1464]: time="2025-01-30T13:55:20.582898309Z" level=info msg="using legacy CRI server" Jan 30 13:55:20.584121 containerd[1464]: time="2025-01-30T13:55:20.582907431Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:55:20.584121 containerd[1464]: time="2025-01-30T13:55:20.583039839Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:55:20.584121 containerd[1464]: time="2025-01-30T13:55:20.583936877Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:55:20.586946 
containerd[1464]: time="2025-01-30T13:55:20.584318716Z" level=info msg="Start subscribing containerd event" Jan 30 13:55:20.586946 containerd[1464]: time="2025-01-30T13:55:20.584414049Z" level=info msg="Start recovering state" Jan 30 13:55:20.599186 containerd[1464]: time="2025-01-30T13:55:20.598503844Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:55:20.599186 containerd[1464]: time="2025-01-30T13:55:20.598636657Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:55:20.599186 containerd[1464]: time="2025-01-30T13:55:20.598689028Z" level=info msg="Start event monitor" Jan 30 13:55:20.599186 containerd[1464]: time="2025-01-30T13:55:20.598712163Z" level=info msg="Start snapshots syncer" Jan 30 13:55:20.599186 containerd[1464]: time="2025-01-30T13:55:20.598728209Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:55:20.599186 containerd[1464]: time="2025-01-30T13:55:20.598736196Z" level=info msg="Start streaming server" Jan 30 13:55:20.598962 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:55:20.604839 containerd[1464]: time="2025-01-30T13:55:20.603830381Z" level=info msg="containerd successfully booted in 0.154850s" Jan 30 13:55:20.664881 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:55:20.679905 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:55:20.692931 systemd[1]: Started sshd@0-64.23.155.240:22-147.75.109.163:44684.service - OpenSSH per-connection server daemon (147.75.109.163:44684). Jan 30 13:55:20.731759 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:55:20.734681 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:55:20.747247 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:55:20.808471 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:55:20.825541 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:55:20.840112 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:55:20.845926 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:55:20.870140 sshd[1541]: Accepted publickey for core from 147.75.109.163 port 44684 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:20.874987 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:20.888349 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:55:20.902073 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:55:20.911009 systemd-logind[1442]: New session 1 of user core. Jan 30 13:55:20.947951 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:55:20.966014 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:55:20.990222 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:55:21.150130 systemd[1553]: Queued start job for default target default.target. Jan 30 13:55:21.159004 systemd[1553]: Created slice app.slice - User Application Slice. Jan 30 13:55:21.159377 systemd[1553]: Reached target paths.target - Paths. Jan 30 13:55:21.159510 systemd[1553]: Reached target timers.target - Timers. Jan 30 13:55:21.162709 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 30 13:55:21.187057 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:55:21.187336 systemd[1553]: Reached target sockets.target - Sockets. Jan 30 13:55:21.187449 systemd[1553]: Reached target basic.target - Basic System. Jan 30 13:55:21.187505 systemd[1553]: Reached target default.target - Main User Target. Jan 30 13:55:21.187673 systemd[1553]: Startup finished in 171ms. Jan 30 13:55:21.188629 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:55:21.196948 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:55:21.233331 tar[1451]: linux-amd64/LICENSE Jan 30 13:55:21.235063 tar[1451]: linux-amd64/README.md Jan 30 13:55:21.266393 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:55:21.281905 systemd[1]: Started sshd@1-64.23.155.240:22-147.75.109.163:44692.service - OpenSSH per-connection server daemon (147.75.109.163:44692). Jan 30 13:55:21.351180 sshd[1567]: Accepted publickey for core from 147.75.109.163 port 44692 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:21.352978 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:21.359270 systemd-logind[1442]: New session 2 of user core. Jan 30 13:55:21.365778 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:55:21.436484 sshd[1567]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:21.450265 systemd[1]: sshd@1-64.23.155.240:22-147.75.109.163:44692.service: Deactivated successfully. Jan 30 13:55:21.453099 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:55:21.456587 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:55:21.463910 systemd[1]: Started sshd@2-64.23.155.240:22-147.75.109.163:44694.service - OpenSSH per-connection server daemon (147.75.109.163:44694). Jan 30 13:55:21.470955 systemd-logind[1442]: Removed session 2. Jan 30 13:55:21.525523 sshd[1574]: Accepted publickey for core from 147.75.109.163 port 44694 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:21.528879 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:21.536601 systemd-logind[1442]: New session 3 of user core. Jan 30 13:55:21.541697 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:55:21.610608 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:21.615180 systemd[1]: sshd@2-64.23.155.240:22-147.75.109.163:44694.service: Deactivated successfully. Jan 30 13:55:21.617286 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:55:21.619569 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:55:21.621197 systemd-logind[1442]: Removed session 3. Jan 30 13:55:21.653296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:21.654586 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:55:21.656858 systemd[1]: Startup finished in 1.121s (kernel) + 5.283s (initrd) + 5.711s (userspace) = 12.117s. 
Jan 30 13:55:21.666886 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:22.614367 kubelet[1585]: E0130 13:55:22.614273 1585 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:22.617173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:22.617394 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:22.618708 systemd[1]: kubelet.service: Consumed 1.456s CPU time. Jan 30 13:55:31.422527 systemd[1]: Started sshd@3-64.23.155.240:22-147.75.109.163:39932.service - OpenSSH per-connection server daemon (147.75.109.163:39932). Jan 30 13:55:31.482432 sshd[1598]: Accepted publickey for core from 147.75.109.163 port 39932 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.485357 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.491873 systemd-logind[1442]: New session 4 of user core. Jan 30 13:55:31.502955 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:55:31.565822 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:31.578461 systemd[1]: sshd@3-64.23.155.240:22-147.75.109.163:39932.service: Deactivated successfully. Jan 30 13:55:31.580345 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:55:31.583202 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:55:31.588955 systemd[1]: Started sshd@4-64.23.155.240:22-147.75.109.163:39942.service - OpenSSH per-connection server daemon (147.75.109.163:39942). Jan 30 13:55:31.590829 systemd-logind[1442]: Removed session 4. Jan 30 13:55:31.643770 sshd[1605]: Accepted publickey for core from 147.75.109.163 port 39942 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.645834 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.653699 systemd-logind[1442]: New session 5 of user core. Jan 30 13:55:31.660750 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:55:31.717968 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:31.727825 systemd[1]: sshd@4-64.23.155.240:22-147.75.109.163:39942.service: Deactivated successfully. Jan 30 13:55:31.730117 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:55:31.732671 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:55:31.739919 systemd[1]: Started sshd@5-64.23.155.240:22-147.75.109.163:39958.service - OpenSSH per-connection server daemon (147.75.109.163:39958). Jan 30 13:55:31.742512 systemd-logind[1442]: Removed session 5. Jan 30 13:55:31.781375 sshd[1612]: Accepted publickey for core from 147.75.109.163 port 39958 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.783753 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.791289 systemd-logind[1442]: New session 6 of user core. Jan 30 13:55:31.804793 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 13:55:31.870026 sshd[1612]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:31.883440 systemd[1]: sshd@5-64.23.155.240:22-147.75.109.163:39958.service: Deactivated successfully. Jan 30 13:55:31.886000 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:55:31.887954 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:55:31.893795 systemd[1]: Started sshd@6-64.23.155.240:22-147.75.109.163:39960.service - OpenSSH per-connection server daemon (147.75.109.163:39960). Jan 30 13:55:31.895783 systemd-logind[1442]: Removed session 6. Jan 30 13:55:31.936443 sshd[1619]: Accepted publickey for core from 147.75.109.163 port 39960 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.938799 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.946028 systemd-logind[1442]: New session 7 of user core. Jan 30 13:55:31.952709 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:55:32.023163 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:55:32.024545 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:32.038588 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:32.042902 sshd[1619]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:32.059534 systemd[1]: sshd@6-64.23.155.240:22-147.75.109.163:39960.service: Deactivated successfully. Jan 30 13:55:32.063280 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:55:32.065094 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:55:32.070957 systemd[1]: Started sshd@7-64.23.155.240:22-147.75.109.163:39972.service - OpenSSH per-connection server daemon (147.75.109.163:39972). Jan 30 13:55:32.072919 systemd-logind[1442]: Removed session 7. Jan 30 13:55:32.126285 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 39972 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:32.128257 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:32.133438 systemd-logind[1442]: New session 8 of user core. Jan 30 13:55:32.140689 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:55:32.202315 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:55:32.202695 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:32.207618 sudo[1631]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:32.215674 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:55:32.216142 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:32.234853 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:55:32.237990 auditctl[1634]: No rules Jan 30 13:55:32.238417 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:55:32.238600 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:55:32.241331 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:55:32.290865 augenrules[1652]: No rules Jan 30 13:55:32.291292 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 30 13:55:32.292866 sudo[1630]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:32.298751 sshd[1627]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:32.310720 systemd[1]: sshd@7-64.23.155.240:22-147.75.109.163:39972.service: Deactivated successfully. Jan 30 13:55:32.313577 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:55:32.314697 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:55:32.321901 systemd[1]: Started sshd@8-64.23.155.240:22-147.75.109.163:39986.service - OpenSSH per-connection server daemon (147.75.109.163:39986). Jan 30 13:55:32.323536 systemd-logind[1442]: Removed session 8. Jan 30 13:55:32.361810 sshd[1660]: Accepted publickey for core from 147.75.109.163 port 39986 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:32.364064 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:32.368796 systemd-logind[1442]: New session 9 of user core. Jan 30 13:55:32.378902 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:55:32.438146 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:55:32.438502 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:32.867601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:55:32.876930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:32.890847 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:55:32.896863 (dockerd)[1682]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:55:33.109501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:33.121008 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:33.191426 kubelet[1688]: E0130 13:55:33.191363 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:33.195924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:33.196705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:33.402486 dockerd[1682]: time="2025-01-30T13:55:33.401710524Z" level=info msg="Starting up" Jan 30 13:55:33.540778 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1272684247-merged.mount: Deactivated successfully. Jan 30 13:55:33.584774 dockerd[1682]: time="2025-01-30T13:55:33.584718443Z" level=info msg="Loading containers: start." Jan 30 13:55:33.730425 kernel: Initializing XFRM netlink socket Jan 30 13:55:33.835190 systemd-networkd[1365]: docker0: Link UP Jan 30 13:55:33.853425 dockerd[1682]: time="2025-01-30T13:55:33.853321757Z" level=info msg="Loading containers: done." 
Jan 30 13:55:33.872086 dockerd[1682]: time="2025-01-30T13:55:33.872032446Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:55:33.872391 dockerd[1682]: time="2025-01-30T13:55:33.872211400Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:55:33.872391 dockerd[1682]: time="2025-01-30T13:55:33.872367805Z" level=info msg="Daemon has completed initialization" Jan 30 13:55:33.925267 dockerd[1682]: time="2025-01-30T13:55:33.925020038Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:55:33.925648 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:55:35.021366 containerd[1464]: time="2025-01-30T13:55:35.021082069Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:55:35.737760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915361267.mount: Deactivated successfully. Jan 30 13:55:37.111465 containerd[1464]: time="2025-01-30T13:55:37.111317837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:37.112692 containerd[1464]: time="2025-01-30T13:55:37.112481237Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:55:37.113444 containerd[1464]: time="2025-01-30T13:55:37.113265998Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:37.119332 containerd[1464]: time="2025-01-30T13:55:37.119266979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:37.121248 containerd[1464]: time="2025-01-30T13:55:37.120585489Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.099424954s" Jan 30 13:55:37.121248 containerd[1464]: time="2025-01-30T13:55:37.120649036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:55:37.152295 containerd[1464]: time="2025-01-30T13:55:37.152245225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:55:38.800449 containerd[1464]: time="2025-01-30T13:55:38.800338566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:38.801943 containerd[1464]: time="2025-01-30T13:55:38.801886351Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:55:38.802721 containerd[1464]: time="2025-01-30T13:55:38.802660352Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:38.805376 containerd[1464]: time="2025-01-30T13:55:38.805303642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:38.807026 containerd[1464]: time="2025-01-30T13:55:38.806877505Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.654586173s" Jan 30 13:55:38.807026 containerd[1464]: time="2025-01-30T13:55:38.806923170Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:55:38.836188 containerd[1464]: time="2025-01-30T13:55:38.836118491Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:55:40.035062 containerd[1464]: time="2025-01-30T13:55:40.034994767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:40.037035 containerd[1464]: time="2025-01-30T13:55:40.036971584Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:55:40.038558 containerd[1464]: time="2025-01-30T13:55:40.038516166Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:40.042597 containerd[1464]: time="2025-01-30T13:55:40.042469025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:40.044842 containerd[1464]: time="2025-01-30T13:55:40.044772380Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.208608583s" Jan 30 13:55:40.044842 containerd[1464]: time="2025-01-30T13:55:40.044838445Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:55:40.081097 containerd[1464]: time="2025-01-30T13:55:40.081041903Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:55:41.144560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210516071.mount: Deactivated successfully. 
Jan 30 13:55:41.659742 containerd[1464]: time="2025-01-30T13:55:41.659660943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.661005 containerd[1464]: time="2025-01-30T13:55:41.660560934Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:55:41.661852 containerd[1464]: time="2025-01-30T13:55:41.661792093Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.664925 containerd[1464]: time="2025-01-30T13:55:41.664718728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.665916 containerd[1464]: time="2025-01-30T13:55:41.665868646Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.584776002s" Jan 30 13:55:41.666101 containerd[1464]: time="2025-01-30T13:55:41.666078861Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:55:41.694789 containerd[1464]: time="2025-01-30T13:55:41.694496430Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:55:41.696423 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 13:55:42.252630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307090833.mount: Deactivated successfully. 
Jan 30 13:55:43.249840 containerd[1464]: time="2025-01-30T13:55:43.249710286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.251375 containerd[1464]: time="2025-01-30T13:55:43.251302953Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:55:43.252065 containerd[1464]: time="2025-01-30T13:55:43.251978395Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.254983 containerd[1464]: time="2025-01-30T13:55:43.254917761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.258009 containerd[1464]: time="2025-01-30T13:55:43.256826843Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.562029274s" Jan 30 13:55:43.258009 containerd[1464]: time="2025-01-30T13:55:43.256909774Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:55:43.298663 containerd[1464]: time="2025-01-30T13:55:43.298615907Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:55:43.446528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:55:43.463123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:43.601715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:43.603000 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:43.670130 kubelet[1990]: E0130 13:55:43.670071 1990 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:43.673384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:43.673673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:43.831128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597704539.mount: Deactivated successfully. 
Jan 30 13:55:43.840471 containerd[1464]: time="2025-01-30T13:55:43.840290269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.842544 containerd[1464]: time="2025-01-30T13:55:43.842455930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:55:43.843895 containerd[1464]: time="2025-01-30T13:55:43.843811965Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.847235 containerd[1464]: time="2025-01-30T13:55:43.847194880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.848752 containerd[1464]: time="2025-01-30T13:55:43.848293243Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 549.632546ms" Jan 30 13:55:43.848752 containerd[1464]: time="2025-01-30T13:55:43.848331664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:55:43.876942 containerd[1464]: time="2025-01-30T13:55:43.876813020Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:55:44.441098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646379035.mount: Deactivated successfully. Jan 30 13:55:44.761677 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 30 13:55:46.307282 containerd[1464]: time="2025-01-30T13:55:46.307217839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:46.308918 containerd[1464]: time="2025-01-30T13:55:46.308866306Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:55:46.310158 containerd[1464]: time="2025-01-30T13:55:46.310077300Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:46.323163 containerd[1464]: time="2025-01-30T13:55:46.323057284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:46.325287 containerd[1464]: time="2025-01-30T13:55:46.325148897Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.448298551s" Jan 30 13:55:46.325287 containerd[1464]: time="2025-01-30T13:55:46.325201938Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:55:49.919483 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:49.929922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:49.965658 systemd[1]: Reloading requested from client PID 2116 ('systemctl') (unit session-9.scope)... Jan 30 13:55:49.965876 systemd[1]: Reloading... Jan 30 13:55:50.124429 zram_generator::config[2156]: No configuration found. Jan 30 13:55:50.271322 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:50.362349 systemd[1]: Reloading finished in 395 ms. Jan 30 13:55:50.421206 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:55:50.421345 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:55:50.421902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:50.429179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:50.574906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:50.589377 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:55:50.661017 kubelet[2209]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:50.661017 kubelet[2209]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:55:50.661017 kubelet[2209]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:50.662343 kubelet[2209]: I0130 13:55:50.662255 2209 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:55:51.172615 kubelet[2209]: I0130 13:55:51.172563 2209 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:55:51.172615 kubelet[2209]: I0130 13:55:51.172601 2209 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:55:51.172916 kubelet[2209]: I0130 13:55:51.172886 2209 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:55:51.193897 kubelet[2209]: I0130 13:55:51.193734 2209 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:55:51.196080 kubelet[2209]: E0130 13:55:51.195907 2209 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.155.240:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:51.208489 kubelet[2209]: I0130 13:55:51.208409 2209 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:55:51.208832 kubelet[2209]: I0130 13:55:51.208772 2209 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:55:51.209020 kubelet[2209]: I0130 13:55:51.208827 2209 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-04505505d0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:55:51.209165 kubelet[2209]: I0130 13:55:51.209039 2209 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 30 13:55:51.209165 kubelet[2209]: I0130 13:55:51.209056 2209 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:55:51.209258 kubelet[2209]: I0130 13:55:51.209241 2209 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:51.210920 kubelet[2209]: W0130 13:55:51.210782 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.155.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-04505505d0&limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:51.210920 kubelet[2209]: E0130 13:55:51.210881 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.155.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-04505505d0&limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:51.212141 kubelet[2209]: I0130 13:55:51.212073 2209 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:55:51.212141 kubelet[2209]: I0130 13:55:51.212126 2209 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:55:51.212345 kubelet[2209]: I0130 13:55:51.212163 2209 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:55:51.212345 kubelet[2209]: I0130 13:55:51.212180 2209 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:55:51.216442 kubelet[2209]: W0130 13:55:51.215766 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.155.240:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:51.216442 kubelet[2209]: E0130 13:55:51.215848 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.155.240:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:51.219451 kubelet[2209]: I0130 13:55:51.218776 2209 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:55:51.222312 kubelet[2209]: I0130 13:55:51.222266 2209 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:55:51.223523 kubelet[2209]: W0130 13:55:51.222594 2209 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:55:51.227437 kubelet[2209]: I0130 13:55:51.226543 2209 server.go:1264] "Started kubelet" Jan 30 13:55:51.229655 kubelet[2209]: I0130 13:55:51.229580 2209 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:55:51.235301 kubelet[2209]: I0130 13:55:51.235231 2209 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:55:51.241352 kubelet[2209]: E0130 13:55:51.240463 2209 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.155.240:6443/api/v1/namespaces/default/events\": dial tcp 64.23.155.240:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-04505505d0.181f7ceff45427f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-04505505d0,UID:ci-4081.3.0-a-04505505d0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-04505505d0,},FirstTimestamp:2025-01-30 13:55:51.226484723 +0000 UTC m=+0.631017793,LastTimestamp:2025-01-30 13:55:51.226484723 +0000 UTC m=+0.631017793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-04505505d0,}" Jan 30 13:55:51.246689 kubelet[2209]: I0130 13:55:51.245342 2209 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:55:51.246689 kubelet[2209]: I0130 13:55:51.246234 2209 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:55:51.247283 kubelet[2209]: I0130 13:55:51.247255 2209 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:55:51.248973 kubelet[2209]: I0130 13:55:51.248939 2209 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:55:51.252880 kubelet[2209]: E0130 13:55:51.252843 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.155.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-04505505d0?timeout=10s\": dial tcp 64.23.155.240:6443: connect: connection refused" interval="200ms" Jan 30 13:55:51.254974 kubelet[2209]: I0130 13:55:51.254940 2209 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:55:51.255160 kubelet[2209]: I0130 13:55:51.255144 2209 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:55:51.255361 kubelet[2209]: I0130 13:55:51.255333 2209 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:55:51.262600 kubelet[2209]: E0130 13:55:51.262569 2209 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:55:51.262770 kubelet[2209]: I0130 13:55:51.255068 2209 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:55:51.262816 kubelet[2209]: I0130 13:55:51.255194 2209 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:55:51.268766 kubelet[2209]: I0130 13:55:51.268713 2209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:55:51.270644 kubelet[2209]: I0130 13:55:51.270595 2209 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:55:51.270814 kubelet[2209]: I0130 13:55:51.270804 2209 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:55:51.270889 kubelet[2209]: I0130 13:55:51.270879 2209 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:55:51.271026 kubelet[2209]: E0130 13:55:51.271007 2209 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:55:51.279994 kubelet[2209]: W0130 13:55:51.279930 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.155.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:51.279994 kubelet[2209]: E0130 13:55:51.279994 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.155.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:51.280408 kubelet[2209]: W0130 13:55:51.280349 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.155.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:51.280408 kubelet[2209]: E0130 13:55:51.280424 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.155.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:51.292957 kubelet[2209]: I0130 13:55:51.292816 2209 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:55:51.292957 kubelet[2209]: I0130 13:55:51.292893 2209 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:55:51.292957 kubelet[2209]: I0130 13:55:51.292932 2209 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:51.304364 kubelet[2209]: I0130 13:55:51.304300 2209 policy_none.go:49] "None policy: Start" Jan 30 13:55:51.305592 kubelet[2209]: I0130 13:55:51.305551 2209 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:55:51.305592 kubelet[2209]: I0130 13:55:51.305601 2209 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:55:51.322302 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:55:51.336859 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:55:51.340641 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:55:51.351205 kubelet[2209]: I0130 13:55:51.350949 2209 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:55:51.351205 kubelet[2209]: I0130 13:55:51.350972 2209 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.351899 kubelet[2209]: I0130 13:55:51.351591 2209 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:55:51.351899 kubelet[2209]: E0130 13:55:51.351614 2209 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.155.240:6443/api/v1/nodes\": dial tcp 64.23.155.240:6443: connect: connection refused" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.351899 kubelet[2209]: I0130 13:55:51.351742 2209 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:55:51.354344 kubelet[2209]: E0130 13:55:51.354310 2209 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-04505505d0\" not found" Jan 30 13:55:51.372943 kubelet[2209]: I0130 13:55:51.372389 2209 topology_manager.go:215] "Topology Admit Handler" podUID="69a20aacbcf04fcb70280b430c66a706" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.373837 kubelet[2209]: I0130 13:55:51.373807 2209 topology_manager.go:215] "Topology Admit Handler" podUID="39560b79da973238ec3ada684a076799" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.375545 kubelet[2209]: I0130 13:55:51.374993 2209 topology_manager.go:215] "Topology Admit Handler" podUID="25d8c8c3b51ed58a9b53c9f779b96cae" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.381994 systemd[1]: Created slice kubepods-burstable-pod69a20aacbcf04fcb70280b430c66a706.slice - libcontainer container kubepods-burstable-pod69a20aacbcf04fcb70280b430c66a706.slice. Jan 30 13:55:51.400503 systemd[1]: Created slice kubepods-burstable-pod39560b79da973238ec3ada684a076799.slice - libcontainer container kubepods-burstable-pod39560b79da973238ec3ada684a076799.slice. Jan 30 13:55:51.416927 systemd[1]: Created slice kubepods-burstable-pod25d8c8c3b51ed58a9b53c9f779b96cae.slice - libcontainer container kubepods-burstable-pod25d8c8c3b51ed58a9b53c9f779b96cae.slice. 
Jan 30 13:55:51.454770 kubelet[2209]: E0130 13:55:51.454586 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.155.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-04505505d0?timeout=10s\": dial tcp 64.23.155.240:6443: connect: connection refused" interval="400ms" Jan 30 13:55:51.464318 kubelet[2209]: I0130 13:55:51.464185 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.464318 kubelet[2209]: I0130 13:55:51.464243 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69a20aacbcf04fcb70280b430c66a706-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-04505505d0\" (UID: \"69a20aacbcf04fcb70280b430c66a706\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.464318 kubelet[2209]: I0130 13:55:51.464313 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69a20aacbcf04fcb70280b430c66a706-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-04505505d0\" (UID: \"69a20aacbcf04fcb70280b430c66a706\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.464595 kubelet[2209]: I0130 13:55:51.464389 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69a20aacbcf04fcb70280b430c66a706-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-04505505d0\" (UID: \"69a20aacbcf04fcb70280b430c66a706\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.464595 kubelet[2209]: I0130 13:55:51.464443 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.464595 kubelet[2209]: I0130 13:55:51.464469 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.464595 kubelet[2209]: I0130 13:55:51.464490 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.464595 kubelet[2209]: I0130 13:55:51.464519 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.464803 kubelet[2209]: I0130 13:55:51.464548 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25d8c8c3b51ed58a9b53c9f779b96cae-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-04505505d0\" (UID: \"25d8c8c3b51ed58a9b53c9f779b96cae\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.553923 kubelet[2209]: I0130 13:55:51.553419 2209 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.554185 kubelet[2209]: E0130 13:55:51.554150 2209 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.155.240:6443/api/v1/nodes\": dial tcp 64.23.155.240:6443: connect: connection refused" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.697655 kubelet[2209]: E0130 13:55:51.697556 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:51.698941 containerd[1464]: time="2025-01-30T13:55:51.698716347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-04505505d0,Uid:69a20aacbcf04fcb70280b430c66a706,Namespace:kube-system,Attempt:0,}" Jan 30 13:55:51.701140 systemd-resolved[1321]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 30 13:55:51.714640 kubelet[2209]: E0130 13:55:51.714153 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:51.719427 containerd[1464]: time="2025-01-30T13:55:51.719327517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-04505505d0,Uid:39560b79da973238ec3ada684a076799,Namespace:kube-system,Attempt:0,}" Jan 30 13:55:51.720799 kubelet[2209]: E0130 13:55:51.720618 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:51.724147 containerd[1464]: time="2025-01-30T13:55:51.724058696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-04505505d0,Uid:25d8c8c3b51ed58a9b53c9f779b96cae,Namespace:kube-system,Attempt:0,}" Jan 30 13:55:51.856136 kubelet[2209]: E0130 13:55:51.856068 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.155.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-04505505d0?timeout=10s\": dial tcp 64.23.155.240:6443: connect: connection refused" interval="800ms" Jan 30 13:55:51.956831 kubelet[2209]: I0130 13:55:51.956388 2209 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:51.956831 kubelet[2209]: E0130 13:55:51.956815 2209 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.155.240:6443/api/v1/nodes\": dial tcp 64.23.155.240:6443: connect: connection refused" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:52.093543 kubelet[2209]: W0130 
13:55:52.093382 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.155.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:52.093543 kubelet[2209]: E0130 13:55:52.093459 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.155.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:52.229520 kubelet[2209]: W0130 13:55:52.229423 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.155.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:52.229520 kubelet[2209]: E0130 13:55:52.229486 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.155.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:52.249509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount314783160.mount: Deactivated successfully. Jan 30 13:55:52.271449 containerd[1464]: time="2025-01-30T13:55:52.270501443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.272023 containerd[1464]: time="2025-01-30T13:55:52.271967411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:55:52.272739 containerd[1464]: time="2025-01-30T13:55:52.272709578Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.274266 containerd[1464]: time="2025-01-30T13:55:52.274224385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.274879 containerd[1464]: time="2025-01-30T13:55:52.274713862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:55:52.276444 containerd[1464]: time="2025-01-30T13:55:52.275448196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.276444 containerd[1464]: time="2025-01-30T13:55:52.275836493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:55:52.279176 containerd[1464]: time="2025-01-30T13:55:52.279092236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.281394 containerd[1464]: time="2025-01-30T13:55:52.281101636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.632419ms" Jan 30 13:55:52.283231 containerd[1464]: time="2025-01-30T13:55:52.283079266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 558.922778ms" Jan 30 13:55:52.285581 containerd[1464]: time="2025-01-30T13:55:52.285525437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 586.698868ms" Jan 30 13:55:52.423598 kubelet[2209]: W0130 13:55:52.423250 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.155.240:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:52.423598 kubelet[2209]: E0130 13:55:52.423344 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.155.240:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:52.467671 containerd[1464]: time="2025-01-30T13:55:52.467153010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:52.467671 containerd[1464]: time="2025-01-30T13:55:52.467304049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:52.467671 containerd[1464]: time="2025-01-30T13:55:52.467335502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.467671 containerd[1464]: time="2025-01-30T13:55:52.467541559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.495245 containerd[1464]: time="2025-01-30T13:55:52.495113179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:52.495245 containerd[1464]: time="2025-01-30T13:55:52.495178884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:52.495245 containerd[1464]: time="2025-01-30T13:55:52.495198537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.495546 containerd[1464]: time="2025-01-30T13:55:52.495305537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.498751 containerd[1464]: time="2025-01-30T13:55:52.498475781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:52.499315 containerd[1464]: time="2025-01-30T13:55:52.499029731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:52.499629 containerd[1464]: time="2025-01-30T13:55:52.499191279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.502643 systemd[1]: Started cri-containerd-51bb2d13baf9f6fd677f11a343616c3a9fc3c542735f1d1f80aaf916e8317564.scope - libcontainer container 51bb2d13baf9f6fd677f11a343616c3a9fc3c542735f1d1f80aaf916e8317564. Jan 30 13:55:52.508115 containerd[1464]: time="2025-01-30T13:55:52.506247262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.545698 systemd[1]: Started cri-containerd-941ec174c90f88b26f2db4a9c3e1926651b5f22a5cbe51e2bd3affe065e87035.scope - libcontainer container 941ec174c90f88b26f2db4a9c3e1926651b5f22a5cbe51e2bd3affe065e87035. Jan 30 13:55:52.552345 systemd[1]: Started cri-containerd-f3ac8049a1d50f1b86abae3917003d4bfe8a0333bca276f1f54557e12338101b.scope - libcontainer container f3ac8049a1d50f1b86abae3917003d4bfe8a0333bca276f1f54557e12338101b. Jan 30 13:55:52.631759 containerd[1464]: time="2025-01-30T13:55:52.631710772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-04505505d0,Uid:39560b79da973238ec3ada684a076799,Namespace:kube-system,Attempt:0,} returns sandbox id \"941ec174c90f88b26f2db4a9c3e1926651b5f22a5cbe51e2bd3affe065e87035\"" Jan 30 13:55:52.632584 kubelet[2209]: W0130 13:55:52.632230 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.155.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-04505505d0&limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:52.632975 kubelet[2209]: E0130 13:55:52.632944 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.155.240:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-04505505d0&limit=500&resourceVersion=0": dial tcp 64.23.155.240:6443: connect: connection refused Jan 30 13:55:52.641651 kubelet[2209]: E0130 13:55:52.641610 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:52.648159 containerd[1464]: time="2025-01-30T13:55:52.648056381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-04505505d0,Uid:69a20aacbcf04fcb70280b430c66a706,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3ac8049a1d50f1b86abae3917003d4bfe8a0333bca276f1f54557e12338101b\"" Jan 30 13:55:52.650198 kubelet[2209]: E0130 13:55:52.650062 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:52.652562 containerd[1464]: time="2025-01-30T13:55:52.652390987Z" level=info msg="CreateContainer within sandbox \"941ec174c90f88b26f2db4a9c3e1926651b5f22a5cbe51e2bd3affe065e87035\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:55:52.655310 containerd[1464]: 
time="2025-01-30T13:55:52.655123726Z" level=info msg="CreateContainer within sandbox \"f3ac8049a1d50f1b86abae3917003d4bfe8a0333bca276f1f54557e12338101b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:55:52.657647 kubelet[2209]: E0130 13:55:52.657477 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.155.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-04505505d0?timeout=10s\": dial tcp 64.23.155.240:6443: connect: connection refused" interval="1.6s" Jan 30 13:55:52.672498 containerd[1464]: time="2025-01-30T13:55:52.672264714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-04505505d0,Uid:25d8c8c3b51ed58a9b53c9f779b96cae,Namespace:kube-system,Attempt:0,} returns sandbox id \"51bb2d13baf9f6fd677f11a343616c3a9fc3c542735f1d1f80aaf916e8317564\"" Jan 30 13:55:52.675134 kubelet[2209]: E0130 13:55:52.673976 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:52.678380 containerd[1464]: time="2025-01-30T13:55:52.678306033Z" level=info msg="CreateContainer within sandbox \"51bb2d13baf9f6fd677f11a343616c3a9fc3c542735f1d1f80aaf916e8317564\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:55:52.683357 containerd[1464]: time="2025-01-30T13:55:52.683300777Z" level=info msg="CreateContainer within sandbox \"f3ac8049a1d50f1b86abae3917003d4bfe8a0333bca276f1f54557e12338101b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68db9e49c6efeeadbce2e3734d3b94f8e37e99a741bf58f17d72895a871f291a\"" Jan 30 13:55:52.684238 containerd[1464]: time="2025-01-30T13:55:52.684204480Z" level=info msg="StartContainer for \"68db9e49c6efeeadbce2e3734d3b94f8e37e99a741bf58f17d72895a871f291a\"" Jan 30 13:55:52.698436 containerd[1464]: time="2025-01-30T13:55:52.698111932Z" level=info msg="CreateContainer within sandbox \"941ec174c90f88b26f2db4a9c3e1926651b5f22a5cbe51e2bd3affe065e87035\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c58d5753939dea129318f621aa38fdd9ef2ad4145eef3fc7cdcab96571a4c585\"" Jan 30 13:55:52.699204 containerd[1464]: time="2025-01-30T13:55:52.699175759Z" level=info msg="StartContainer for \"c58d5753939dea129318f621aa38fdd9ef2ad4145eef3fc7cdcab96571a4c585\"" Jan 30 13:55:52.711530 containerd[1464]: time="2025-01-30T13:55:52.711338869Z" level=info msg="CreateContainer within sandbox \"51bb2d13baf9f6fd677f11a343616c3a9fc3c542735f1d1f80aaf916e8317564\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ce41f6737b02cff926b35cf1412da82bbca7b4796ed158d66952aee789d5167\"" Jan 30 13:55:52.712454 containerd[1464]: time="2025-01-30T13:55:52.712109989Z" level=info msg="StartContainer for \"6ce41f6737b02cff926b35cf1412da82bbca7b4796ed158d66952aee789d5167\"" Jan 30 13:55:52.729666 systemd[1]: Started cri-containerd-68db9e49c6efeeadbce2e3734d3b94f8e37e99a741bf58f17d72895a871f291a.scope - libcontainer container 68db9e49c6efeeadbce2e3734d3b94f8e37e99a741bf58f17d72895a871f291a. Jan 30 13:55:52.757048 systemd[1]: Started cri-containerd-c58d5753939dea129318f621aa38fdd9ef2ad4145eef3fc7cdcab96571a4c585.scope - libcontainer container c58d5753939dea129318f621aa38fdd9ef2ad4145eef3fc7cdcab96571a4c585. 
Jan 30 13:55:52.760751 kubelet[2209]: I0130 13:55:52.760639 2209 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:52.763520 kubelet[2209]: E0130 13:55:52.763260 2209 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.155.240:6443/api/v1/nodes\": dial tcp 64.23.155.240:6443: connect: connection refused" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:52.785667 systemd[1]: Started cri-containerd-6ce41f6737b02cff926b35cf1412da82bbca7b4796ed158d66952aee789d5167.scope - libcontainer container 6ce41f6737b02cff926b35cf1412da82bbca7b4796ed158d66952aee789d5167. Jan 30 13:55:52.808090 containerd[1464]: time="2025-01-30T13:55:52.808044916Z" level=info msg="StartContainer for \"68db9e49c6efeeadbce2e3734d3b94f8e37e99a741bf58f17d72895a871f291a\" returns successfully" Jan 30 13:55:52.849425 containerd[1464]: time="2025-01-30T13:55:52.849356788Z" level=info msg="StartContainer for \"c58d5753939dea129318f621aa38fdd9ef2ad4145eef3fc7cdcab96571a4c585\" returns successfully" Jan 30 13:55:52.884717 containerd[1464]: time="2025-01-30T13:55:52.884657705Z" level=info msg="StartContainer for \"6ce41f6737b02cff926b35cf1412da82bbca7b4796ed158d66952aee789d5167\" returns successfully" Jan 30 13:55:53.292314 kubelet[2209]: E0130 13:55:53.292242 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:53.297271 kubelet[2209]: E0130 13:55:53.296168 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:53.298849 kubelet[2209]: E0130 13:55:53.298822 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:54.300282 kubelet[2209]: E0130 13:55:54.300182 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:54.365466 kubelet[2209]: I0130 13:55:54.365348 2209 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:54.865545 kubelet[2209]: E0130 13:55:54.865470 2209 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-04505505d0\" not found" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:55.003457 kubelet[2209]: E0130 13:55:55.003193 2209 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-04505505d0.181f7ceff45427f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-04505505d0,UID:ci-4081.3.0-a-04505505d0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-04505505d0,},FirstTimestamp:2025-01-30 13:55:51.226484723 +0000 UTC m=+0.631017793,LastTimestamp:2025-01-30 13:55:51.226484723 +0000 UTC m=+0.631017793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-04505505d0,}" Jan 30 13:55:55.039546 kubelet[2209]: I0130 
13:55:55.039478 2209 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:55.216345 kubelet[2209]: I0130 13:55:55.214760 2209 apiserver.go:52] "Watching apiserver" Jan 30 13:55:55.263050 kubelet[2209]: I0130 13:55:55.262980 2209 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:55:55.901153 kubelet[2209]: W0130 13:55:55.901087 2209 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:55.903441 kubelet[2209]: E0130 13:55:55.902227 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:56.303343 kubelet[2209]: E0130 13:55:56.303188 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:57.027450 systemd[1]: Reloading requested from client PID 2486 ('systemctl') (unit session-9.scope)... Jan 30 13:55:57.027912 systemd[1]: Reloading... Jan 30 13:55:57.113470 zram_generator::config[2524]: No configuration found. Jan 30 13:55:57.277101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:57.386573 systemd[1]: Reloading finished in 358 ms. Jan 30 13:55:57.436204 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:57.454865 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:55:57.455220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:57.455314 systemd[1]: kubelet.service: Consumed 1.136s CPU time, 112.1M memory peak, 0B memory swap peak. Jan 30 13:55:57.462837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:57.648839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:57.653066 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:55:57.736321 kubelet[2576]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:57.736321 kubelet[2576]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:55:57.736321 kubelet[2576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:55:57.737070 kubelet[2576]: I0130 13:55:57.736361 2576 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:55:57.742359 kubelet[2576]: I0130 13:55:57.742301 2576 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:55:57.742359 kubelet[2576]: I0130 13:55:57.742336 2576 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:55:57.742577 kubelet[2576]: I0130 13:55:57.742559 2576 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:55:57.744043 kubelet[2576]: I0130 13:55:57.744010 2576 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:55:57.745684 kubelet[2576]: I0130 13:55:57.745528 2576 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:55:57.755962 kubelet[2576]: I0130 13:55:57.755927 2576 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:55:57.756672 kubelet[2576]: I0130 13:55:57.756434 2576 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:55:57.757011 kubelet[2576]: I0130 13:55:57.756471 2576 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-04505505d0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:55:57.757178 kubelet[2576]: I0130 13:55:57.757166 2576 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:55:57.757232 kubelet[2576]: I0130 13:55:57.757225 2576 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:55:57.757326 kubelet[2576]: I0130 13:55:57.757318 2576 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:57.757535 kubelet[2576]: I0130 13:55:57.757525 2576 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:55:57.757816 kubelet[2576]: I0130 13:55:57.757798 2576 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jan 30 13:55:57.758432 kubelet[2576]: I0130 13:55:57.757904 2576 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:55:57.758432 kubelet[2576]: I0130 13:55:57.757923 2576 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:55:57.759486 kubelet[2576]: I0130 13:55:57.759457 2576 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:55:57.759763 kubelet[2576]: I0130 13:55:57.759749 2576 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:55:57.761252 kubelet[2576]: I0130 13:55:57.761231 2576 server.go:1264] "Started kubelet" Jan 30 13:55:57.765625 kubelet[2576]: I0130 13:55:57.764944 2576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:55:57.785034 kubelet[2576]: I0130 13:55:57.784956 2576 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:55:57.786956 kubelet[2576]: I0130 13:55:57.786809 2576 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:55:57.787786 kubelet[2576]: I0130 13:55:57.787736 2576 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:55:57.787953 kubelet[2576]: I0130 13:55:57.787940 2576 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:55:57.789891 kubelet[2576]: I0130 13:55:57.789859 2576 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:55:57.790699 kubelet[2576]: I0130 13:55:57.790354 2576 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:55:57.790699 kubelet[2576]: I0130 13:55:57.790527 2576 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:55:57.795260 kubelet[2576]: I0130 13:55:57.795230 2576 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:55:57.796451 kubelet[2576]: I0130 13:55:57.795530 2576 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:55:57.799436 kubelet[2576]: E0130 13:55:57.797995 2576 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:55:57.799959 kubelet[2576]: I0130 13:55:57.799647 2576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:55:57.802349 kubelet[2576]: I0130 13:55:57.802205 2576 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:55:57.802940 kubelet[2576]: I0130 13:55:57.802917 2576 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:55:57.803041 kubelet[2576]: I0130 13:55:57.803033 2576 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:55:57.803097 kubelet[2576]: I0130 13:55:57.803090 2576 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:55:57.803198 kubelet[2576]: E0130 13:55:57.803182 2576 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:55:57.872165 kubelet[2576]: I0130 13:55:57.872134 2576 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:55:57.872165 kubelet[2576]: I0130 13:55:57.872155 2576 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:55:57.872165 kubelet[2576]: I0130 13:55:57.872177 2576 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:57.872380 kubelet[2576]: I0130 13:55:57.872340 2576 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:55:57.872380 kubelet[2576]: I0130 13:55:57.872350 2576 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:55:57.872380 kubelet[2576]: I0130 13:55:57.872368 2576 policy_none.go:49] "None policy: Start" Jan 30 13:55:57.874370 kubelet[2576]: I0130 13:55:57.873392 2576 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:55:57.874370 kubelet[2576]: I0130 13:55:57.873442 2576 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:55:57.874370 kubelet[2576]: I0130 13:55:57.873614 2576 state_mem.go:75] "Updated machine memory state" Jan 30 13:55:57.878470 kubelet[2576]: I0130 13:55:57.878442 2576 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:55:57.878870 kubelet[2576]: I0130 13:55:57.878828 2576 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:55:57.879049 kubelet[2576]: I0130 13:55:57.879039 2576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:55:57.891892 kubelet[2576]: I0130 13:55:57.891179 2576 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.905767 kubelet[2576]: I0130 13:55:57.904023 2576 topology_manager.go:215] "Topology Admit Handler" podUID="25d8c8c3b51ed58a9b53c9f779b96cae" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.906372 kubelet[2576]: I0130 13:55:57.906226 2576 topology_manager.go:215] "Topology Admit Handler" podUID="69a20aacbcf04fcb70280b430c66a706" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.907724 kubelet[2576]: I0130 13:55:57.907694 2576 topology_manager.go:215] "Topology Admit Handler" podUID="39560b79da973238ec3ada684a076799" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.909786 kubelet[2576]: I0130 13:55:57.909619 2576 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.910060 kubelet[2576]: I0130 13:55:57.909747 2576 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.931849 kubelet[2576]: W0130 13:55:57.931797 2576 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:57.935585 kubelet[2576]: W0130 13:55:57.933984 2576 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:57.939712 kubelet[2576]: W0130 13:55:57.939666 2576 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:57.940567 kubelet[2576]: E0130 13:55:57.940502 2576 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-04505505d0\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.991881 kubelet[2576]: I0130 13:55:57.991781 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69a20aacbcf04fcb70280b430c66a706-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-04505505d0\" (UID: \"69a20aacbcf04fcb70280b430c66a706\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.991881 kubelet[2576]: I0130 13:55:57.991853 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.991881 kubelet[2576]: I0130 13:55:57.991883 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.992204 kubelet[2576]: I0130 13:55:57.991906 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.992204 kubelet[2576]: I0130 13:55:57.991992 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69a20aacbcf04fcb70280b430c66a706-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-04505505d0\" (UID: \"69a20aacbcf04fcb70280b430c66a706\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.994479 kubelet[2576]: I0130 13:55:57.992040 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69a20aacbcf04fcb70280b430c66a706-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-04505505d0\" (UID: \"69a20aacbcf04fcb70280b430c66a706\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.994682 kubelet[2576]: I0130 13:55:57.994545 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.994682 kubelet[2576]: I0130 13:55:57.994603 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39560b79da973238ec3ada684a076799-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-04505505d0\" (UID: \"39560b79da973238ec3ada684a076799\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" Jan 30 13:55:57.994682 kubelet[2576]: I0130 13:55:57.994668 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25d8c8c3b51ed58a9b53c9f779b96cae-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-04505505d0\" (UID: \"25d8c8c3b51ed58a9b53c9f779b96cae\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-04505505d0" Jan 30 13:55:58.237023 kubelet[2576]: E0130 13:55:58.236066 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:58.237181 kubelet[2576]: E0130 13:55:58.237164 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:58.242911 kubelet[2576]: E0130 13:55:58.242852 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:58.770208 kubelet[2576]: I0130 13:55:58.769909 2576 apiserver.go:52] "Watching apiserver" Jan 30 13:55:58.790786 kubelet[2576]: I0130 13:55:58.790735 2576 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:55:58.818599 kubelet[2576]: I0130 13:55:58.818311 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-04505505d0" podStartSLOduration=1.8182882299999998 podStartE2EDuration="1.81828823s" podCreationTimestamp="2025-01-30 13:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:58.802999142 +0000 UTC m=+1.141910279" watchObservedRunningTime="2025-01-30 13:55:58.81828823 +0000 UTC m=+1.157199360" Jan 30 13:55:58.832566 kubelet[2576]: I0130 13:55:58.832489 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-04505505d0" podStartSLOduration=3.832457777 podStartE2EDuration="3.832457777s" podCreationTimestamp="2025-01-30 13:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:58.820281084 +0000 UTC m=+1.159192221" watchObservedRunningTime="2025-01-30 13:55:58.832457777 +0000 UTC m=+1.171368913" Jan 30 13:55:58.846751 kubelet[2576]: E0130 13:55:58.846644 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:58.849329 kubelet[2576]: E0130 13:55:58.849197 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:58.863005 kubelet[2576]: W0130 13:55:58.862875 2576 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:58.863601 kubelet[2576]: E0130 13:55:58.863031 2576 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-04505505d0\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-a-04505505d0" Jan 30 13:55:58.863601 kubelet[2576]: E0130 13:55:58.863575 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:58.873835 kubelet[2576]: I0130 13:55:58.873768 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-04505505d0" podStartSLOduration=1.873745062 podStartE2EDuration="1.873745062s" podCreationTimestamp="2025-01-30 13:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:58.833637362 +0000 UTC m=+1.172548497" watchObservedRunningTime="2025-01-30 13:55:58.873745062 +0000 UTC m=+1.212656200" Jan 30 13:55:59.853689 kubelet[2576]: E0130 13:55:59.853656 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:59.855776 kubelet[2576]: E0130 13:55:59.855748 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:01.268585 kubelet[2576]: E0130 13:56:01.268171 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:03.717301 sudo[1663]: pam_unix(sudo:session): session closed for user root Jan 30 13:56:03.722359 sshd[1660]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:03.727962 systemd[1]: sshd@8-64.23.155.240:22-147.75.109.163:39986.service: Deactivated successfully. Jan 30 13:56:03.731809 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:56:03.732168 systemd[1]: session-9.scope: Consumed 6.019s CPU time, 187.9M memory peak, 0B memory swap peak. Jan 30 13:56:03.734440 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:56:03.736513 systemd-logind[1442]: Removed session 9. Jan 30 13:56:04.541483 update_engine[1443]: I20250130 13:56:04.540835 1443 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:56:04.582682 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2658) Jan 30 13:56:04.666508 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2657) Jan 30 13:56:04.726439 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2657) Jan 30 13:56:05.325813 kubelet[2576]: E0130 13:56:05.325772 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:05.867983 kubelet[2576]: E0130 13:56:05.867933 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:08.635902 kubelet[2576]: E0130 13:56:08.635856 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:11.273094 kubelet[2576]: E0130 13:56:11.272945 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:12.103993 kubelet[2576]: I0130 13:56:12.103950 2576 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:56:12.106136 containerd[1464]: time="2025-01-30T13:56:12.105687534Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:56:12.106581 kubelet[2576]: I0130 13:56:12.105974 2576 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:56:13.143192 kubelet[2576]: I0130 13:56:13.142126 2576 topology_manager.go:215] "Topology Admit Handler" podUID="609604a2-2385-4c69-ad37-63865d888cf7" podNamespace="kube-system" podName="kube-proxy-5p588" Jan 30 13:56:13.154298 systemd[1]: Created slice kubepods-besteffort-pod609604a2_2385_4c69_ad37_63865d888cf7.slice - libcontainer container kubepods-besteffort-pod609604a2_2385_4c69_ad37_63865d888cf7.slice. 
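
The kuberuntime_manager and kubelet_network entries above record the node's pod CIDR moving from empty to 192.168.0.0/24 and being pushed to the container runtime over CRI. As a rough illustration of what that allocation provides, the following standard-library sketch parses the CIDR and counts its addresses; the usable-IP arithmetic is illustrative only, not taken from kubelet code.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Pod CIDR reported for this node in the log above.
        _, cidr, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := cidr.Mask.Size()
        total := 1 << (bits - ones) // 256 addresses in a /24
        fmt.Printf("podCIDR %s: %d addresses, roughly %d usable pod IPs\n",
            cidr, total, total-2) // minus network/broadcast-style reservations
    }
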
Jan 30 13:56:13.301196 kubelet[2576]: I0130 13:56:13.300987 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/609604a2-2385-4c69-ad37-63865d888cf7-kube-proxy\") pod \"kube-proxy-5p588\" (UID: \"609604a2-2385-4c69-ad37-63865d888cf7\") " pod="kube-system/kube-proxy-5p588" Jan 30 13:56:13.301196 kubelet[2576]: I0130 13:56:13.301025 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/609604a2-2385-4c69-ad37-63865d888cf7-xtables-lock\") pod \"kube-proxy-5p588\" (UID: \"609604a2-2385-4c69-ad37-63865d888cf7\") " pod="kube-system/kube-proxy-5p588" Jan 30 13:56:13.301196 kubelet[2576]: I0130 13:56:13.301113 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/609604a2-2385-4c69-ad37-63865d888cf7-lib-modules\") pod \"kube-proxy-5p588\" (UID: \"609604a2-2385-4c69-ad37-63865d888cf7\") " pod="kube-system/kube-proxy-5p588" Jan 30 13:56:13.301196 kubelet[2576]: I0130 13:56:13.301148 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49m8v\" (UniqueName: \"kubernetes.io/projected/609604a2-2385-4c69-ad37-63865d888cf7-kube-api-access-49m8v\") pod \"kube-proxy-5p588\" (UID: \"609604a2-2385-4c69-ad37-63865d888cf7\") " pod="kube-system/kube-proxy-5p588" Jan 30 13:56:13.322444 kubelet[2576]: I0130 13:56:13.321209 2576 topology_manager.go:215] "Topology Admit Handler" podUID="11d928db-8175-491b-95f3-396b03be8df4" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-8zsz2" Jan 30 13:56:13.330556 systemd[1]: Created slice kubepods-besteffort-pod11d928db_8175_491b_95f3_396b03be8df4.slice - libcontainer container kubepods-besteffort-pod11d928db_8175_491b_95f3_396b03be8df4.slice. Jan 30 13:56:13.461330 kubelet[2576]: E0130 13:56:13.461177 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:13.463123 containerd[1464]: time="2025-01-30T13:56:13.463071916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5p588,Uid:609604a2-2385-4c69-ad37-63865d888cf7,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:13.501859 kubelet[2576]: I0130 13:56:13.501806 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/11d928db-8175-491b-95f3-396b03be8df4-var-lib-calico\") pod \"tigera-operator-7bc55997bb-8zsz2\" (UID: \"11d928db-8175-491b-95f3-396b03be8df4\") " pod="tigera-operator/tigera-operator-7bc55997bb-8zsz2" Jan 30 13:56:13.502035 kubelet[2576]: I0130 13:56:13.501865 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhbg\" (UniqueName: \"kubernetes.io/projected/11d928db-8175-491b-95f3-396b03be8df4-kube-api-access-pdhbg\") pod \"tigera-operator-7bc55997bb-8zsz2\" (UID: \"11d928db-8175-491b-95f3-396b03be8df4\") " pod="tigera-operator/tigera-operator-7bc55997bb-8zsz2" Jan 30 13:56:13.503332 containerd[1464]: time="2025-01-30T13:56:13.503142886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:13.504315 containerd[1464]: time="2025-01-30T13:56:13.504232166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:13.504565 containerd[1464]: time="2025-01-30T13:56:13.504497845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:13.504989 containerd[1464]: time="2025-01-30T13:56:13.504895852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:13.533073 systemd[1]: Started cri-containerd-50b79a05e4c780a4290466069c2230d36f766507ec3178e843c52d6edf322b0c.scope - libcontainer container 50b79a05e4c780a4290466069c2230d36f766507ec3178e843c52d6edf322b0c. Jan 30 13:56:13.576681 containerd[1464]: time="2025-01-30T13:56:13.576508842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5p588,Uid:609604a2-2385-4c69-ad37-63865d888cf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"50b79a05e4c780a4290466069c2230d36f766507ec3178e843c52d6edf322b0c\"" Jan 30 13:56:13.577674 kubelet[2576]: E0130 13:56:13.577630 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:13.585998 containerd[1464]: time="2025-01-30T13:56:13.585672118Z" level=info msg="CreateContainer within sandbox \"50b79a05e4c780a4290466069c2230d36f766507ec3178e843c52d6edf322b0c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:56:13.620771 containerd[1464]: time="2025-01-30T13:56:13.620611631Z" level=info msg="CreateContainer within sandbox \"50b79a05e4c780a4290466069c2230d36f766507ec3178e843c52d6edf322b0c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec5ce72b49350d2a5e37e105fad911daab7705ffb2f2778e39e895f8c03d4f14\"" Jan 30 13:56:13.622526 containerd[1464]: time="2025-01-30T13:56:13.622149840Z" level=info msg="StartContainer for \"ec5ce72b49350d2a5e37e105fad911daab7705ffb2f2778e39e895f8c03d4f14\"" Jan 30 13:56:13.642393 containerd[1464]: time="2025-01-30T13:56:13.641960872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8zsz2,Uid:11d928db-8175-491b-95f3-396b03be8df4,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:56:13.665111 systemd[1]: Started cri-containerd-ec5ce72b49350d2a5e37e105fad911daab7705ffb2f2778e39e895f8c03d4f14.scope - libcontainer container ec5ce72b49350d2a5e37e105fad911daab7705ffb2f2778e39e895f8c03d4f14. Jan 30 13:56:13.702951 containerd[1464]: time="2025-01-30T13:56:13.702340305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:13.702951 containerd[1464]: time="2025-01-30T13:56:13.702455149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:13.702951 containerd[1464]: time="2025-01-30T13:56:13.702511767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:13.702951 containerd[1464]: time="2025-01-30T13:56:13.702744626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:13.716917 containerd[1464]: time="2025-01-30T13:56:13.715643408Z" level=info msg="StartContainer for \"ec5ce72b49350d2a5e37e105fad911daab7705ffb2f2778e39e895f8c03d4f14\" returns successfully" Jan 30 13:56:13.746706 systemd[1]: Started cri-containerd-39ebd44370fd03d9f60539397ed0a12396b5e4825f684bb5bb55bb45fc973c79.scope - libcontainer container 39ebd44370fd03d9f60539397ed0a12396b5e4825f684bb5bb55bb45fc973c79. Jan 30 13:56:13.814834 containerd[1464]: time="2025-01-30T13:56:13.814770283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8zsz2,Uid:11d928db-8175-491b-95f3-396b03be8df4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"39ebd44370fd03d9f60539397ed0a12396b5e4825f684bb5bb55bb45fc973c79\"" Jan 30 13:56:13.821550 containerd[1464]: time="2025-01-30T13:56:13.821508803Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:56:13.893374 kubelet[2576]: E0130 13:56:13.890667 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:13.907143 kubelet[2576]: I0130 13:56:13.907088 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5p588" podStartSLOduration=0.907071354 podStartE2EDuration="907.071354ms" podCreationTimestamp="2025-01-30 13:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:13.906847129 +0000 UTC m=+16.245758269" watchObservedRunningTime="2025-01-30 13:56:13.907071354 +0000 UTC m=+16.245982492" Jan 30 13:56:15.400250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146086485.mount: Deactivated successfully. 
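
The containerd entries above walk through the usual CRI pod lifecycle for kube-proxy-5p588 and the tigera-operator pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer reports success. The sketch below models only that ordering with a hypothetical Runtime interface; it is not the real CRI gRPC API, just the call sequence the log shows.

    package main

    import "fmt"

    // Runtime is a hypothetical, much-reduced stand-in for a CRI runtime,
    // kept only to show the ordering visible in the log above.
    type Runtime interface {
        RunPodSandbox(name string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(name string) (string, error) {
        f.n++
        return fmt.Sprintf("sandbox-%d", f.n), nil
    }

    func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
        f.n++
        return fmt.Sprintf("container-%d", f.n), nil
    }

    func (f *fakeRuntime) StartContainer(containerID string) error { return nil }

    // startPod follows the sequence seen in the log: sandbox first,
    // then the container inside it, then the start call.
    func startPod(rt Runtime, pod, container string) error {
        sb, err := rt.RunPodSandbox(pod)
        if err != nil {
            return err
        }
        ctr, err := rt.CreateContainer(sb, container)
        if err != nil {
            return err
        }
        return rt.StartContainer(ctr)
    }

    func main() {
        if err := startPod(&fakeRuntime{}, "kube-proxy-5p588", "kube-proxy"); err != nil {
            fmt.Println("start failed:", err)
            return
        }
        fmt.Println("sandbox created, container created and started")
    }
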
Jan 30 13:56:16.073606 containerd[1464]: time="2025-01-30T13:56:16.073525959Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:16.075530 containerd[1464]: time="2025-01-30T13:56:16.075486786Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:56:16.076522 containerd[1464]: time="2025-01-30T13:56:16.076488596Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:16.079699 containerd[1464]: time="2025-01-30T13:56:16.079638508Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:16.080786 containerd[1464]: time="2025-01-30T13:56:16.080750481Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.259198435s" Jan 30 13:56:16.080971 containerd[1464]: time="2025-01-30T13:56:16.080910759Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:56:16.115249 containerd[1464]: time="2025-01-30T13:56:16.115043853Z" level=info msg="CreateContainer within sandbox \"39ebd44370fd03d9f60539397ed0a12396b5e4825f684bb5bb55bb45fc973c79\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:56:16.138964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056277855.mount: Deactivated successfully. Jan 30 13:56:16.142975 containerd[1464]: time="2025-01-30T13:56:16.142906987Z" level=info msg="CreateContainer within sandbox \"39ebd44370fd03d9f60539397ed0a12396b5e4825f684bb5bb55bb45fc973c79\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1e4c81a872769e823f8ea8428b70e269c39f5c5ae4c738cff54bf4f9e6b6c4a7\"" Jan 30 13:56:16.147219 containerd[1464]: time="2025-01-30T13:56:16.146018420Z" level=info msg="StartContainer for \"1e4c81a872769e823f8ea8428b70e269c39f5c5ae4c738cff54bf4f9e6b6c4a7\"" Jan 30 13:56:16.202816 systemd[1]: run-containerd-runc-k8s.io-1e4c81a872769e823f8ea8428b70e269c39f5c5ae4c738cff54bf4f9e6b6c4a7-runc.7D1H3a.mount: Deactivated successfully. Jan 30 13:56:16.215595 systemd[1]: Started cri-containerd-1e4c81a872769e823f8ea8428b70e269c39f5c5ae4c738cff54bf4f9e6b6c4a7.scope - libcontainer container 1e4c81a872769e823f8ea8428b70e269c39f5c5ae4c738cff54bf4f9e6b6c4a7. 
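
The pull of quay.io/tigera/operator:v1.36.2 above reports 21762497 bytes read over 2.259198435s. A small arithmetic sketch of the implied transfer rate, using only the figures in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Figures reported in the log above for the tigera/operator pull.
        bytesRead := 21762497.0
        elapsed := 2259198435 * time.Nanosecond // "2.259198435s"

        mib := bytesRead / (1024 * 1024)
        rate := mib / elapsed.Seconds()
        fmt.Printf("pulled %.1f MiB in %s (about %.1f MiB/s)\n", mib, elapsed, rate)
    }
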
Jan 30 13:56:16.262225 containerd[1464]: time="2025-01-30T13:56:16.262167234Z" level=info msg="StartContainer for \"1e4c81a872769e823f8ea8428b70e269c39f5c5ae4c738cff54bf4f9e6b6c4a7\" returns successfully" Jan 30 13:56:19.628759 kubelet[2576]: I0130 13:56:19.628676 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-8zsz2" podStartSLOduration=4.358472961 podStartE2EDuration="6.628648742s" podCreationTimestamp="2025-01-30 13:56:13 +0000 UTC" firstStartedPulling="2025-01-30 13:56:13.818981385 +0000 UTC m=+16.157892523" lastFinishedPulling="2025-01-30 13:56:16.089157174 +0000 UTC m=+18.428068304" observedRunningTime="2025-01-30 13:56:16.923459356 +0000 UTC m=+19.262370491" watchObservedRunningTime="2025-01-30 13:56:19.628648742 +0000 UTC m=+21.967559882" Jan 30 13:56:19.630014 kubelet[2576]: I0130 13:56:19.629929 2576 topology_manager.go:215] "Topology Admit Handler" podUID="c67c9f0b-baec-4ff5-a5c8-14461108eabf" podNamespace="calico-system" podName="calico-typha-5677fb8787-t4bng" Jan 30 13:56:19.654459 systemd[1]: Created slice kubepods-besteffort-podc67c9f0b_baec_4ff5_a5c8_14461108eabf.slice - libcontainer container kubepods-besteffort-podc67c9f0b_baec_4ff5_a5c8_14461108eabf.slice. Jan 30 13:56:19.759586 kubelet[2576]: I0130 13:56:19.759445 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c67c9f0b-baec-4ff5-a5c8-14461108eabf-typha-certs\") pod \"calico-typha-5677fb8787-t4bng\" (UID: \"c67c9f0b-baec-4ff5-a5c8-14461108eabf\") " pod="calico-system/calico-typha-5677fb8787-t4bng" Jan 30 13:56:19.759586 kubelet[2576]: I0130 13:56:19.759501 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c67c9f0b-baec-4ff5-a5c8-14461108eabf-tigera-ca-bundle\") pod \"calico-typha-5677fb8787-t4bng\" (UID: \"c67c9f0b-baec-4ff5-a5c8-14461108eabf\") " pod="calico-system/calico-typha-5677fb8787-t4bng" Jan 30 13:56:19.759586 kubelet[2576]: I0130 13:56:19.759523 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bhwk\" (UniqueName: \"kubernetes.io/projected/c67c9f0b-baec-4ff5-a5c8-14461108eabf-kube-api-access-7bhwk\") pod \"calico-typha-5677fb8787-t4bng\" (UID: \"c67c9f0b-baec-4ff5-a5c8-14461108eabf\") " pod="calico-system/calico-typha-5677fb8787-t4bng" Jan 30 13:56:19.847128 kubelet[2576]: I0130 13:56:19.846927 2576 topology_manager.go:215] "Topology Admit Handler" podUID="8f68e650-6608-4d2c-9cf2-ee53aa34a722" podNamespace="calico-system" podName="calico-node-52xh2" Jan 30 13:56:19.859390 systemd[1]: Created slice kubepods-besteffort-pod8f68e650_6608_4d2c_9cf2_ee53aa34a722.slice - libcontainer container kubepods-besteffort-pod8f68e650_6608_4d2c_9cf2_ee53aa34a722.slice. 
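
The pod_startup_latency_tracker entry above for tigera-operator-7bc55997bb-8zsz2 reports podStartSLOduration=4.358472961s against podStartE2EDuration=6.628648742s. Those values are consistent, to within display rounding, with the SLO duration being the end-to-end duration minus the image-pull window; the sketch below redoes that arithmetic from the timestamps in the entry. The subtraction itself is an inference from the values, not taken from kubelet source.

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(layout, s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        // Timestamps reported for tigera-operator-7bc55997bb-8zsz2 in the log above.
        created := mustParse(layout, "2025-01-30 13:56:13 +0000 UTC")
        firstPull := mustParse(layout, "2025-01-30 13:56:13.818981385 +0000 UTC")
        lastPull := mustParse(layout, "2025-01-30 13:56:16.089157174 +0000 UTC")
        observed := mustParse(layout, "2025-01-30 13:56:19.628648742 +0000 UTC")

        e2e := observed.Sub(created)    // matches podStartE2EDuration
        pull := lastPull.Sub(firstPull) // time spent pulling the image
        fmt.Println("e2e:", e2e, "pull:", pull, "e2e-pull:", e2e-pull) // ~podStartSLOduration
    }
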
Jan 30 13:56:19.963230 kubelet[2576]: I0130 13:56:19.961784 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f68e650-6608-4d2c-9cf2-ee53aa34a722-xtables-lock\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963230 kubelet[2576]: I0130 13:56:19.961829 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8f68e650-6608-4d2c-9cf2-ee53aa34a722-flexvol-driver-host\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963230 kubelet[2576]: I0130 13:56:19.961850 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8f68e650-6608-4d2c-9cf2-ee53aa34a722-node-certs\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963230 kubelet[2576]: I0130 13:56:19.961869 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8f68e650-6608-4d2c-9cf2-ee53aa34a722-var-run-calico\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963230 kubelet[2576]: I0130 13:56:19.961890 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8f68e650-6608-4d2c-9cf2-ee53aa34a722-cni-bin-dir\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963583 kubelet[2576]: I0130 13:56:19.961906 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f68e650-6608-4d2c-9cf2-ee53aa34a722-tigera-ca-bundle\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963583 kubelet[2576]: I0130 13:56:19.961922 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f68e650-6608-4d2c-9cf2-ee53aa34a722-var-lib-calico\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963583 kubelet[2576]: I0130 13:56:19.961937 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f68e650-6608-4d2c-9cf2-ee53aa34a722-lib-modules\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963583 kubelet[2576]: I0130 13:56:19.961959 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8f68e650-6608-4d2c-9cf2-ee53aa34a722-cni-net-dir\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963583 kubelet[2576]: I0130 13:56:19.961974 2576 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8f68e650-6608-4d2c-9cf2-ee53aa34a722-cni-log-dir\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963775 kubelet[2576]: I0130 13:56:19.962006 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8f68e650-6608-4d2c-9cf2-ee53aa34a722-policysync\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.963775 kubelet[2576]: I0130 13:56:19.962028 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7xmd\" (UniqueName: \"kubernetes.io/projected/8f68e650-6608-4d2c-9cf2-ee53aa34a722-kube-api-access-d7xmd\") pod \"calico-node-52xh2\" (UID: \"8f68e650-6608-4d2c-9cf2-ee53aa34a722\") " pod="calico-system/calico-node-52xh2" Jan 30 13:56:19.968762 kubelet[2576]: E0130 13:56:19.968712 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:19.970769 containerd[1464]: time="2025-01-30T13:56:19.970727516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5677fb8787-t4bng,Uid:c67c9f0b-baec-4ff5-a5c8-14461108eabf,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:19.997306 kubelet[2576]: I0130 13:56:19.997237 2576 topology_manager.go:215] "Topology Admit Handler" podUID="1de43d45-e363-48a6-9642-5ad8984fd09e" podNamespace="calico-system" podName="csi-node-driver-rjzh2" Jan 30 13:56:20.001003 kubelet[2576]: E0130 13:56:20.000719 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjzh2" podUID="1de43d45-e363-48a6-9642-5ad8984fd09e" Jan 30 13:56:20.032449 containerd[1464]: time="2025-01-30T13:56:20.030826799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:20.032449 containerd[1464]: time="2025-01-30T13:56:20.030996043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:20.032449 containerd[1464]: time="2025-01-30T13:56:20.031062309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:20.032449 containerd[1464]: time="2025-01-30T13:56:20.031215633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:20.070746 kubelet[2576]: E0130 13:56:20.070632 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.070746 kubelet[2576]: W0130 13:56:20.070658 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.070746 kubelet[2576]: E0130 13:56:20.070682 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.075966 kubelet[2576]: E0130 13:56:20.075104 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.075966 kubelet[2576]: W0130 13:56:20.075139 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.075966 kubelet[2576]: E0130 13:56:20.075167 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.078016 kubelet[2576]: E0130 13:56:20.077495 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.078016 kubelet[2576]: W0130 13:56:20.077522 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.078016 kubelet[2576]: E0130 13:56:20.077553 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.078588 kubelet[2576]: E0130 13:56:20.078554 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.078588 kubelet[2576]: W0130 13:56:20.078576 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.078729 kubelet[2576]: E0130 13:56:20.078604 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.079597 kubelet[2576]: E0130 13:56:20.079569 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.079713 kubelet[2576]: W0130 13:56:20.079699 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.079795 kubelet[2576]: E0130 13:56:20.079778 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.084904 kubelet[2576]: E0130 13:56:20.084867 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.084904 kubelet[2576]: W0130 13:56:20.084890 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.085141 kubelet[2576]: E0130 13:56:20.084921 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.086640 systemd[1]: Started cri-containerd-ae09d9ebafd903ec6b020b4b7805979dd9bfed264cd0ed59397950f5300322d6.scope - libcontainer container ae09d9ebafd903ec6b020b4b7805979dd9bfed264cd0ed59397950f5300322d6. Jan 30 13:56:20.100150 kubelet[2576]: E0130 13:56:20.099790 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.100150 kubelet[2576]: W0130 13:56:20.099820 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.100150 kubelet[2576]: E0130 13:56:20.099852 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.165352 kubelet[2576]: E0130 13:56:20.165314 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.165352 kubelet[2576]: W0130 13:56:20.165340 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.165748 kubelet[2576]: E0130 13:56:20.165367 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.165748 kubelet[2576]: I0130 13:56:20.165551 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1de43d45-e363-48a6-9642-5ad8984fd09e-varrun\") pod \"csi-node-driver-rjzh2\" (UID: \"1de43d45-e363-48a6-9642-5ad8984fd09e\") " pod="calico-system/csi-node-driver-rjzh2" Jan 30 13:56:20.165748 kubelet[2576]: E0130 13:56:20.165666 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.165748 kubelet[2576]: W0130 13:56:20.165674 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.165748 kubelet[2576]: E0130 13:56:20.165690 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.166724 kubelet[2576]: E0130 13:56:20.166672 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.166724 kubelet[2576]: W0130 13:56:20.166701 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.166724 kubelet[2576]: E0130 13:56:20.166745 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.167064 kubelet[2576]: E0130 13:56:20.167046 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.167064 kubelet[2576]: W0130 13:56:20.167064 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.167148 kubelet[2576]: E0130 13:56:20.167080 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.167148 kubelet[2576]: I0130 13:56:20.167125 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gtzw\" (UniqueName: \"kubernetes.io/projected/1de43d45-e363-48a6-9642-5ad8984fd09e-kube-api-access-8gtzw\") pod \"csi-node-driver-rjzh2\" (UID: \"1de43d45-e363-48a6-9642-5ad8984fd09e\") " pod="calico-system/csi-node-driver-rjzh2" Jan 30 13:56:20.167936 kubelet[2576]: E0130 13:56:20.167913 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.167936 kubelet[2576]: W0130 13:56:20.167935 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.168180 kubelet[2576]: E0130 13:56:20.168161 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:20.168460 kubelet[2576]: E0130 13:56:20.168436 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.168513 kubelet[2576]: I0130 13:56:20.168483 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1de43d45-e363-48a6-9642-5ad8984fd09e-kubelet-dir\") pod \"csi-node-driver-rjzh2\" (UID: \"1de43d45-e363-48a6-9642-5ad8984fd09e\") " pod="calico-system/csi-node-driver-rjzh2" Jan 30 13:56:20.168835 kubelet[2576]: E0130 13:56:20.168794 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.168835 kubelet[2576]: W0130 13:56:20.168818 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.168835 kubelet[2576]: E0130 13:56:20.168834 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.168948 kubelet[2576]: I0130 13:56:20.168864 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1de43d45-e363-48a6-9642-5ad8984fd09e-socket-dir\") pod \"csi-node-driver-rjzh2\" (UID: \"1de43d45-e363-48a6-9642-5ad8984fd09e\") " pod="calico-system/csi-node-driver-rjzh2" Jan 30 13:56:20.169849 kubelet[2576]: E0130 13:56:20.169824 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.169849 kubelet[2576]: W0130 13:56:20.169850 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.169998 kubelet[2576]: E0130 13:56:20.169869 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.169998 kubelet[2576]: I0130 13:56:20.169897 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1de43d45-e363-48a6-9642-5ad8984fd09e-registration-dir\") pod \"csi-node-driver-rjzh2\" (UID: \"1de43d45-e363-48a6-9642-5ad8984fd09e\") " pod="calico-system/csi-node-driver-rjzh2" Jan 30 13:56:20.171866 kubelet[2576]: E0130 13:56:20.171836 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.171866 kubelet[2576]: W0130 13:56:20.171853 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.171866 kubelet[2576]: E0130 13:56:20.171874 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.172155 containerd[1464]: time="2025-01-30T13:56:20.171942476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-52xh2,Uid:8f68e650-6608-4d2c-9cf2-ee53aa34a722,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:20.172594 kubelet[2576]: E0130 13:56:20.172575 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.172594 kubelet[2576]: W0130 13:56:20.172591 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.172704 kubelet[2576]: E0130 13:56:20.172610 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.173660 kubelet[2576]: E0130 13:56:20.173633 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.173660 kubelet[2576]: W0130 13:56:20.173651 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.173660 kubelet[2576]: E0130 13:56:20.173669 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.174059 kubelet[2576]: E0130 13:56:20.174042 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.174059 kubelet[2576]: W0130 13:56:20.174056 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.174208 kubelet[2576]: E0130 13:56:20.174164 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.174771 kubelet[2576]: E0130 13:56:20.174747 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.174771 kubelet[2576]: W0130 13:56:20.174767 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.175021 kubelet[2576]: E0130 13:56:20.174807 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.175529 kubelet[2576]: E0130 13:56:20.175505 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.175529 kubelet[2576]: W0130 13:56:20.175527 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.175828 kubelet[2576]: E0130 13:56:20.175693 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.176556 kubelet[2576]: E0130 13:56:20.176531 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.176630 kubelet[2576]: W0130 13:56:20.176557 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.176630 kubelet[2576]: E0130 13:56:20.176576 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.177340 kubelet[2576]: E0130 13:56:20.177308 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.177340 kubelet[2576]: W0130 13:56:20.177332 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.177966 kubelet[2576]: E0130 13:56:20.177352 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.219580 containerd[1464]: time="2025-01-30T13:56:20.218808026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5677fb8787-t4bng,Uid:c67c9f0b-baec-4ff5-a5c8-14461108eabf,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae09d9ebafd903ec6b020b4b7805979dd9bfed264cd0ed59397950f5300322d6\"" Jan 30 13:56:20.223701 kubelet[2576]: E0130 13:56:20.223168 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:20.228467 containerd[1464]: time="2025-01-30T13:56:20.228294867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:56:20.236154 containerd[1464]: time="2025-01-30T13:56:20.235529680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:20.236154 containerd[1464]: time="2025-01-30T13:56:20.235613306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:20.236154 containerd[1464]: time="2025-01-30T13:56:20.235661394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:20.236154 containerd[1464]: time="2025-01-30T13:56:20.235820665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:20.271147 kubelet[2576]: E0130 13:56:20.271113 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.271147 kubelet[2576]: W0130 13:56:20.271133 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.271377 kubelet[2576]: E0130 13:56:20.271156 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.272447 kubelet[2576]: E0130 13:56:20.271550 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.272447 kubelet[2576]: W0130 13:56:20.271564 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.272447 kubelet[2576]: E0130 13:56:20.271579 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.272447 kubelet[2576]: E0130 13:56:20.271885 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.272447 kubelet[2576]: W0130 13:56:20.271898 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.272447 kubelet[2576]: E0130 13:56:20.271949 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.273148 kubelet[2576]: E0130 13:56:20.272465 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.273148 kubelet[2576]: W0130 13:56:20.272492 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.273148 kubelet[2576]: E0130 13:56:20.272511 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.273793 systemd[1]: Started cri-containerd-61578128f6c2ea819d81853fdd24adac64e8fa9c03f7128b5ab64ac44d6998dc.scope - libcontainer container 61578128f6c2ea819d81853fdd24adac64e8fa9c03f7128b5ab64ac44d6998dc. 
Jan 30 13:56:20.276642 kubelet[2576]: E0130 13:56:20.275574 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.276642 kubelet[2576]: W0130 13:56:20.275597 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.276642 kubelet[2576]: E0130 13:56:20.275630 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.277503 kubelet[2576]: E0130 13:56:20.277217 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.277503 kubelet[2576]: W0130 13:56:20.277252 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.277503 kubelet[2576]: E0130 13:56:20.277385 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.278657 kubelet[2576]: E0130 13:56:20.278632 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.278657 kubelet[2576]: W0130 13:56:20.278650 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.278794 kubelet[2576]: E0130 13:56:20.278765 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.279480 kubelet[2576]: E0130 13:56:20.279455 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.279480 kubelet[2576]: W0130 13:56:20.279470 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.280199 kubelet[2576]: E0130 13:56:20.279748 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.280750 kubelet[2576]: E0130 13:56:20.280713 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.280750 kubelet[2576]: W0130 13:56:20.280729 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.281562 kubelet[2576]: E0130 13:56:20.281359 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.281633 kubelet[2576]: E0130 13:56:20.281589 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.282246 kubelet[2576]: W0130 13:56:20.281841 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.282470 kubelet[2576]: E0130 13:56:20.282449 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.283325 kubelet[2576]: E0130 13:56:20.283295 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.283325 kubelet[2576]: W0130 13:56:20.283313 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.283580 kubelet[2576]: E0130 13:56:20.283505 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.284863 kubelet[2576]: E0130 13:56:20.284769 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.284863 kubelet[2576]: W0130 13:56:20.284791 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.285900 kubelet[2576]: E0130 13:56:20.285171 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.285900 kubelet[2576]: E0130 13:56:20.285884 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.285900 kubelet[2576]: W0130 13:56:20.285896 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.287130 kubelet[2576]: E0130 13:56:20.286593 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.287130 kubelet[2576]: E0130 13:56:20.286946 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.287130 kubelet[2576]: W0130 13:56:20.286960 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.287310 kubelet[2576]: E0130 13:56:20.287152 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.287861 kubelet[2576]: E0130 13:56:20.287756 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.287861 kubelet[2576]: W0130 13:56:20.287772 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.289146 kubelet[2576]: E0130 13:56:20.288067 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.289146 kubelet[2576]: E0130 13:56:20.288447 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.289146 kubelet[2576]: W0130 13:56:20.288458 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.289146 kubelet[2576]: E0130 13:56:20.288819 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.289362 kubelet[2576]: E0130 13:56:20.289163 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.289362 kubelet[2576]: W0130 13:56:20.289174 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.290064 kubelet[2576]: E0130 13:56:20.289824 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.290064 kubelet[2576]: E0130 13:56:20.290047 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.290064 kubelet[2576]: W0130 13:56:20.290057 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.293720 kubelet[2576]: E0130 13:56:20.290389 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.293720 kubelet[2576]: E0130 13:56:20.290676 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.293720 kubelet[2576]: W0130 13:56:20.290689 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.293720 kubelet[2576]: E0130 13:56:20.291134 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.293720 kubelet[2576]: E0130 13:56:20.291602 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.293720 kubelet[2576]: W0130 13:56:20.291614 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.293720 kubelet[2576]: E0130 13:56:20.291856 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.293720 kubelet[2576]: E0130 13:56:20.292171 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.293720 kubelet[2576]: W0130 13:56:20.292286 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.293720 kubelet[2576]: E0130 13:56:20.292429 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.294275 kubelet[2576]: E0130 13:56:20.292872 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.294275 kubelet[2576]: W0130 13:56:20.292882 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.294275 kubelet[2576]: E0130 13:56:20.293099 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.294275 kubelet[2576]: E0130 13:56:20.293365 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.294275 kubelet[2576]: W0130 13:56:20.293374 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.294275 kubelet[2576]: E0130 13:56:20.293577 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.294275 kubelet[2576]: E0130 13:56:20.293914 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.294275 kubelet[2576]: W0130 13:56:20.293928 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.294275 kubelet[2576]: E0130 13:56:20.294230 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.294737 kubelet[2576]: E0130 13:56:20.294479 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.294737 kubelet[2576]: W0130 13:56:20.294490 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.294737 kubelet[2576]: E0130 13:56:20.294502 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.307591 kubelet[2576]: E0130 13:56:20.307552 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.307591 kubelet[2576]: W0130 13:56:20.307588 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.307925 kubelet[2576]: E0130 13:56:20.307614 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.344306 containerd[1464]: time="2025-01-30T13:56:20.343737581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-52xh2,Uid:8f68e650-6608-4d2c-9cf2-ee53aa34a722,Namespace:calico-system,Attempt:0,} returns sandbox id \"61578128f6c2ea819d81853fdd24adac64e8fa9c03f7128b5ab64ac44d6998dc\"" Jan 30 13:56:20.346937 kubelet[2576]: E0130 13:56:20.346873 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:21.638275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620273565.mount: Deactivated successfully. 
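[Editor's note] The repeated driver-call.go / plugins.go errors above come from the kubelet probing the FlexVolume plugin directory nodeagent~uds: the uds driver binary is not installed yet, so the "init" call produces no output and the JSON decode fails. The short Go sketch below is an illustration only, not kubelet source; it reproduces that exact decode error and shows the small JSON status a present driver would be expected to print for "init" (the field names are assumptions for illustration).

    // Rough Go illustration (not kubelet source): decoding the empty output of a
    // missing FlexVolume driver yields exactly the logged error, while a present
    // driver's "init" call would print a small JSON status.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type driverStatus struct {
    	Status       string          `json:"status"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	// The uds binary is not on $PATH, so the "init" call produced no output at all.
    	output := []byte("")

    	var st driverStatus
    	if err := json.Unmarshal(output, &st); err != nil {
    		fmt.Println("Failed to unmarshal output for command: init, error:", err)
    		// -> ... error: unexpected end of JSON input
    	}

    	// What a working driver would be expected to print for "init" (assumed shape):
    	ok := driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
    	b, _ := json.Marshal(ok)
    	fmt.Println(string(b)) // {"status":"Success","capabilities":{"attach":false}}
    }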
Jan 30 13:56:21.809602 kubelet[2576]: E0130 13:56:21.808714 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjzh2" podUID="1de43d45-e363-48a6-9642-5ad8984fd09e" Jan 30 13:56:22.428536 containerd[1464]: time="2025-01-30T13:56:22.427683555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:22.429599 containerd[1464]: time="2025-01-30T13:56:22.429539372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:56:22.430652 containerd[1464]: time="2025-01-30T13:56:22.430587004Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:22.433221 containerd[1464]: time="2025-01-30T13:56:22.433169524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:22.434984 containerd[1464]: time="2025-01-30T13:56:22.434386358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.205747549s" Jan 30 13:56:22.434984 containerd[1464]: time="2025-01-30T13:56:22.434458143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:56:22.438432 containerd[1464]: time="2025-01-30T13:56:22.437288901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:56:22.456991 containerd[1464]: time="2025-01-30T13:56:22.456922661Z" level=info msg="CreateContainer within sandbox \"ae09d9ebafd903ec6b020b4b7805979dd9bfed264cd0ed59397950f5300322d6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:56:22.485818 containerd[1464]: time="2025-01-30T13:56:22.485764419Z" level=info msg="CreateContainer within sandbox \"ae09d9ebafd903ec6b020b4b7805979dd9bfed264cd0ed59397950f5300322d6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe811551c5fe25bac226c5d759edbb4f2e7b65b774665d5907bbf8481809b1ea\"" Jan 30 13:56:22.487706 containerd[1464]: time="2025-01-30T13:56:22.486791316Z" level=info msg="StartContainer for \"fe811551c5fe25bac226c5d759edbb4f2e7b65b774665d5907bbf8481809b1ea\"" Jan 30 13:56:22.537699 systemd[1]: Started cri-containerd-fe811551c5fe25bac226c5d759edbb4f2e7b65b774665d5907bbf8481809b1ea.scope - libcontainer container fe811551c5fe25bac226c5d759edbb4f2e7b65b774665d5907bbf8481809b1ea. 
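[Editor's note] The ImageCreate / PullImage events above record containerd fetching ghcr.io/flatcar/calico/typha:v3.29.1 on the kubelet's behalf. As a hedged aside, a pull like this can be reproduced with the upstream containerd Go client in the "k8s.io" namespace that CRI-managed images use; this sketch assumes the 1.x client import paths and is not the kubelet/CRI code path itself.

    // Hedged sketch: pull the same image with the containerd Go client.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images and containers live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled", img.Name(), img.Target().Digest)
    }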
Jan 30 13:56:22.600125 containerd[1464]: time="2025-01-30T13:56:22.600065129Z" level=info msg="StartContainer for \"fe811551c5fe25bac226c5d759edbb4f2e7b65b774665d5907bbf8481809b1ea\" returns successfully" Jan 30 13:56:22.927315 kubelet[2576]: E0130 13:56:22.927048 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:22.954523 kubelet[2576]: I0130 13:56:22.954457 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5677fb8787-t4bng" podStartSLOduration=1.744847019 podStartE2EDuration="3.954436042s" podCreationTimestamp="2025-01-30 13:56:19 +0000 UTC" firstStartedPulling="2025-01-30 13:56:20.226292718 +0000 UTC m=+22.565203837" lastFinishedPulling="2025-01-30 13:56:22.435881743 +0000 UTC m=+24.774792860" observedRunningTime="2025-01-30 13:56:22.954348882 +0000 UTC m=+25.293260019" watchObservedRunningTime="2025-01-30 13:56:22.954436042 +0000 UTC m=+25.293347179" Jan 30 13:56:22.986738 kubelet[2576]: E0130 13:56:22.986689 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.986738 kubelet[2576]: W0130 13:56:22.986716 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.986738 kubelet[2576]: E0130 13:56:22.986741 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.987027 kubelet[2576]: E0130 13:56:22.986988 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.987027 kubelet[2576]: W0130 13:56:22.986998 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.987027 kubelet[2576]: E0130 13:56:22.987010 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.987218 kubelet[2576]: E0130 13:56:22.987199 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.987218 kubelet[2576]: W0130 13:56:22.987212 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.987341 kubelet[2576]: E0130 13:56:22.987222 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:22.987469 kubelet[2576]: E0130 13:56:22.987454 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.987469 kubelet[2576]: W0130 13:56:22.987466 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.987584 kubelet[2576]: E0130 13:56:22.987476 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.987734 kubelet[2576]: E0130 13:56:22.987718 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.987734 kubelet[2576]: W0130 13:56:22.987732 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.987842 kubelet[2576]: E0130 13:56:22.987743 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.987914 kubelet[2576]: E0130 13:56:22.987898 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.987914 kubelet[2576]: W0130 13:56:22.987909 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.988006 kubelet[2576]: E0130 13:56:22.987919 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.988085 kubelet[2576]: E0130 13:56:22.988070 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.988085 kubelet[2576]: W0130 13:56:22.988081 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.988192 kubelet[2576]: E0130 13:56:22.988088 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.988485 kubelet[2576]: E0130 13:56:22.988467 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.988485 kubelet[2576]: W0130 13:56:22.988487 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.988580 kubelet[2576]: E0130 13:56:22.988498 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:22.988696 kubelet[2576]: E0130 13:56:22.988685 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.988735 kubelet[2576]: W0130 13:56:22.988696 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.988735 kubelet[2576]: E0130 13:56:22.988704 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.988956 kubelet[2576]: E0130 13:56:22.988937 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.988986 kubelet[2576]: W0130 13:56:22.988955 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.988986 kubelet[2576]: E0130 13:56:22.988969 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.989196 kubelet[2576]: E0130 13:56:22.989182 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.989196 kubelet[2576]: W0130 13:56:22.989195 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.989280 kubelet[2576]: E0130 13:56:22.989205 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.989380 kubelet[2576]: E0130 13:56:22.989368 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.989380 kubelet[2576]: W0130 13:56:22.989378 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.989457 kubelet[2576]: E0130 13:56:22.989386 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.989667 kubelet[2576]: E0130 13:56:22.989654 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.989667 kubelet[2576]: W0130 13:56:22.989665 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.989759 kubelet[2576]: E0130 13:56:22.989675 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:22.989852 kubelet[2576]: E0130 13:56:22.989841 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.989852 kubelet[2576]: W0130 13:56:22.989851 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.989914 kubelet[2576]: E0130 13:56:22.989859 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.990027 kubelet[2576]: E0130 13:56:22.990012 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.990027 kubelet[2576]: W0130 13:56:22.990023 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.990087 kubelet[2576]: E0130 13:56:22.990031 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.997777 kubelet[2576]: E0130 13:56:22.997741 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.997777 kubelet[2576]: W0130 13:56:22.997767 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.997777 kubelet[2576]: E0130 13:56:22.997791 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.998248 kubelet[2576]: E0130 13:56:22.998215 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.998248 kubelet[2576]: W0130 13:56:22.998240 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.998519 kubelet[2576]: E0130 13:56:22.998262 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.998519 kubelet[2576]: E0130 13:56:22.998502 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.998519 kubelet[2576]: W0130 13:56:22.998511 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.998667 kubelet[2576]: E0130 13:56:22.998533 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:22.998770 kubelet[2576]: E0130 13:56:22.998752 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.998840 kubelet[2576]: W0130 13:56:22.998781 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.998840 kubelet[2576]: E0130 13:56:22.998795 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.998986 kubelet[2576]: E0130 13:56:22.998974 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.998986 kubelet[2576]: W0130 13:56:22.998984 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.999089 kubelet[2576]: E0130 13:56:22.999002 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.999193 kubelet[2576]: E0130 13:56:22.999180 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.999193 kubelet[2576]: W0130 13:56:22.999190 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.999263 kubelet[2576]: E0130 13:56:22.999245 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:22.999531 kubelet[2576]: E0130 13:56:22.999516 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:22.999531 kubelet[2576]: W0130 13:56:22.999527 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:22.999620 kubelet[2576]: E0130 13:56:22.999543 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:23.000154 kubelet[2576]: E0130 13:56:23.000125 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.000154 kubelet[2576]: W0130 13:56:23.000147 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.000473 kubelet[2576]: E0130 13:56:23.000275 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:23.000554 kubelet[2576]: E0130 13:56:23.000539 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.000554 kubelet[2576]: W0130 13:56:23.000550 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.000641 kubelet[2576]: E0130 13:56:23.000630 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:23.000823 kubelet[2576]: E0130 13:56:23.000799 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.000823 kubelet[2576]: W0130 13:56:23.000819 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.000925 kubelet[2576]: E0130 13:56:23.000840 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:23.001062 kubelet[2576]: E0130 13:56:23.001047 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.001062 kubelet[2576]: W0130 13:56:23.001059 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.001160 kubelet[2576]: E0130 13:56:23.001081 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:23.001316 kubelet[2576]: E0130 13:56:23.001301 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.001316 kubelet[2576]: W0130 13:56:23.001313 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.001451 kubelet[2576]: E0130 13:56:23.001331 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:23.001609 kubelet[2576]: E0130 13:56:23.001595 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.001609 kubelet[2576]: W0130 13:56:23.001607 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.001704 kubelet[2576]: E0130 13:56:23.001627 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:23.001915 kubelet[2576]: E0130 13:56:23.001887 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.001915 kubelet[2576]: W0130 13:56:23.001905 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.002023 kubelet[2576]: E0130 13:56:23.001918 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:23.002248 kubelet[2576]: E0130 13:56:23.002227 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.002248 kubelet[2576]: W0130 13:56:23.002241 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.002359 kubelet[2576]: E0130 13:56:23.002256 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:23.002507 kubelet[2576]: E0130 13:56:23.002481 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.002507 kubelet[2576]: W0130 13:56:23.002497 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.002618 kubelet[2576]: E0130 13:56:23.002510 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:23.003175 kubelet[2576]: E0130 13:56:23.002969 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.003175 kubelet[2576]: W0130 13:56:23.002989 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.003175 kubelet[2576]: E0130 13:56:23.003028 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:23.003480 kubelet[2576]: E0130 13:56:23.003460 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:23.003644 kubelet[2576]: W0130 13:56:23.003581 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:23.003644 kubelet[2576]: E0130 13:56:23.003607 2576 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:23.790342 containerd[1464]: time="2025-01-30T13:56:23.790261805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:23.799515 containerd[1464]: time="2025-01-30T13:56:23.799001518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:56:23.800987 containerd[1464]: time="2025-01-30T13:56:23.800737706Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:23.805240 kubelet[2576]: E0130 13:56:23.805192 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjzh2" podUID="1de43d45-e363-48a6-9642-5ad8984fd09e" Jan 30 13:56:23.809650 containerd[1464]: time="2025-01-30T13:56:23.808070305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:23.810638 containerd[1464]: time="2025-01-30T13:56:23.810596044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.373263319s" Jan 30 13:56:23.810827 containerd[1464]: time="2025-01-30T13:56:23.810771040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:56:23.817521 containerd[1464]: time="2025-01-30T13:56:23.817454638Z" level=info msg="CreateContainer within sandbox \"61578128f6c2ea819d81853fdd24adac64e8fa9c03f7128b5ab64ac44d6998dc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:56:23.847499 containerd[1464]: time="2025-01-30T13:56:23.847438392Z" level=info msg="CreateContainer within sandbox \"61578128f6c2ea819d81853fdd24adac64e8fa9c03f7128b5ab64ac44d6998dc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0a733e1dc732280ea05fc0c7e6c1d0ee5d5bd9a07ba2a6aad5daf371772f124e\"" Jan 30 13:56:23.850494 containerd[1464]: time="2025-01-30T13:56:23.849127485Z" level=info msg="StartContainer for \"0a733e1dc732280ea05fc0c7e6c1d0ee5d5bd9a07ba2a6aad5daf371772f124e\"" Jan 30 13:56:23.906227 systemd[1]: Started cri-containerd-0a733e1dc732280ea05fc0c7e6c1d0ee5d5bd9a07ba2a6aad5daf371772f124e.scope - libcontainer container 0a733e1dc732280ea05fc0c7e6c1d0ee5d5bd9a07ba2a6aad5daf371772f124e. 
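[Editor's note] The flexvol-driver init container started above comes from the pod2daemon-flexvol image, which in a typical Calico install is what copies the missing uds binary into the FlexVolume plugin directory the kubelet was complaining about earlier. A tiny illustrative probe, with the path taken verbatim from the log lines:

    // Illustrative check only: does the FlexVolume driver binary exist yet?
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	const drv = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
    	if _, err := os.Stat(drv); err != nil {
    		fmt.Println("driver still missing:", err) // matches the earlier driver-call failures
    		return
    	}
    	fmt.Println("driver present:", drv)
    }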
Jan 30 13:56:23.931938 kubelet[2576]: I0130 13:56:23.931909 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:56:23.933076 kubelet[2576]: E0130 13:56:23.933042 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:23.970226 containerd[1464]: time="2025-01-30T13:56:23.970178674Z" level=info msg="StartContainer for \"0a733e1dc732280ea05fc0c7e6c1d0ee5d5bd9a07ba2a6aad5daf371772f124e\" returns successfully" Jan 30 13:56:23.992483 systemd[1]: cri-containerd-0a733e1dc732280ea05fc0c7e6c1d0ee5d5bd9a07ba2a6aad5daf371772f124e.scope: Deactivated successfully. Jan 30 13:56:24.043333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a733e1dc732280ea05fc0c7e6c1d0ee5d5bd9a07ba2a6aad5daf371772f124e-rootfs.mount: Deactivated successfully. Jan 30 13:56:24.156771 containerd[1464]: time="2025-01-30T13:56:24.116345897Z" level=info msg="shim disconnected" id=0a733e1dc732280ea05fc0c7e6c1d0ee5d5bd9a07ba2a6aad5daf371772f124e namespace=k8s.io Jan 30 13:56:24.156771 containerd[1464]: time="2025-01-30T13:56:24.156513329Z" level=warning msg="cleaning up after shim disconnected" id=0a733e1dc732280ea05fc0c7e6c1d0ee5d5bd9a07ba2a6aad5daf371772f124e namespace=k8s.io Jan 30 13:56:24.156771 containerd[1464]: time="2025-01-30T13:56:24.156534668Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:56:24.941579 kubelet[2576]: E0130 13:56:24.940923 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:24.947243 containerd[1464]: time="2025-01-30T13:56:24.946867038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:56:25.805260 kubelet[2576]: E0130 13:56:25.804859 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjzh2" podUID="1de43d45-e363-48a6-9642-5ad8984fd09e" Jan 30 13:56:27.807426 kubelet[2576]: E0130 13:56:27.804363 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjzh2" podUID="1de43d45-e363-48a6-9642-5ad8984fd09e" Jan 30 13:56:29.124446 containerd[1464]: time="2025-01-30T13:56:29.123667337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:29.127015 containerd[1464]: time="2025-01-30T13:56:29.126808243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:56:29.130321 containerd[1464]: time="2025-01-30T13:56:29.129660018Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:29.141799 containerd[1464]: time="2025-01-30T13:56:29.140642216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:29.142239 containerd[1464]: time="2025-01-30T13:56:29.142124289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.195189081s" Jan 30 13:56:29.142636 containerd[1464]: time="2025-01-30T13:56:29.142597321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:56:29.147245 containerd[1464]: time="2025-01-30T13:56:29.147087906Z" level=info msg="CreateContainer within sandbox \"61578128f6c2ea819d81853fdd24adac64e8fa9c03f7128b5ab64ac44d6998dc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:56:29.198435 containerd[1464]: time="2025-01-30T13:56:29.198263222Z" level=info msg="CreateContainer within sandbox \"61578128f6c2ea819d81853fdd24adac64e8fa9c03f7128b5ab64ac44d6998dc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146\"" Jan 30 13:56:29.199936 containerd[1464]: time="2025-01-30T13:56:29.199712046Z" level=info msg="StartContainer for \"89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146\"" Jan 30 13:56:29.334704 systemd[1]: run-containerd-runc-k8s.io-89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146-runc.5MUOFq.mount: Deactivated successfully. Jan 30 13:56:29.347903 systemd[1]: Started cri-containerd-89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146.scope - libcontainer container 89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146. Jan 30 13:56:29.390088 containerd[1464]: time="2025-01-30T13:56:29.389950166Z" level=info msg="StartContainer for \"89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146\" returns successfully" Jan 30 13:56:29.805682 kubelet[2576]: E0130 13:56:29.805059 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rjzh2" podUID="1de43d45-e363-48a6-9642-5ad8984fd09e" Jan 30 13:56:29.960198 kubelet[2576]: E0130 13:56:29.959508 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:30.086860 systemd[1]: cri-containerd-89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146.scope: Deactivated successfully. Jan 30 13:56:30.124161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146-rootfs.mount: Deactivated successfully. 
Jan 30 13:56:30.149573 containerd[1464]: time="2025-01-30T13:56:30.149164129Z" level=info msg="shim disconnected" id=89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146 namespace=k8s.io Jan 30 13:56:30.149573 containerd[1464]: time="2025-01-30T13:56:30.149251251Z" level=warning msg="cleaning up after shim disconnected" id=89eb2e4151c72e9589cb86271764b060a23a6bd9acd88dd3d91ed2feadcc0146 namespace=k8s.io Jan 30 13:56:30.149573 containerd[1464]: time="2025-01-30T13:56:30.149262649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:56:30.162603 kubelet[2576]: I0130 13:56:30.162330 2576 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:56:30.204705 kubelet[2576]: I0130 13:56:30.204657 2576 topology_manager.go:215] "Topology Admit Handler" podUID="1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g59jk" Jan 30 13:56:30.206529 kubelet[2576]: I0130 13:56:30.206489 2576 topology_manager.go:215] "Topology Admit Handler" podUID="98eaa407-4d24-4e4e-b6fc-fe8371389f6d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zhk5s" Jan 30 13:56:30.208942 kubelet[2576]: I0130 13:56:30.207776 2576 topology_manager.go:215] "Topology Admit Handler" podUID="bf3a2e24-1024-4d44-97d6-556904b751fc" podNamespace="calico-system" podName="calico-kube-controllers-59b5bcffb9-86cwd" Jan 30 13:56:30.216555 kubelet[2576]: I0130 13:56:30.214475 2576 topology_manager.go:215] "Topology Admit Handler" podUID="10fe79e7-f8b8-48d0-9f50-3dcca5453972" podNamespace="calico-apiserver" podName="calico-apiserver-7846bc9c4f-9s5kj" Jan 30 13:56:30.216555 kubelet[2576]: I0130 13:56:30.214958 2576 topology_manager.go:215] "Topology Admit Handler" podUID="0510e406-ed27-4565-a620-76d33cf07b41" podNamespace="calico-apiserver" podName="calico-apiserver-7846bc9c4f-pphrg" Jan 30 13:56:30.220237 systemd[1]: Created slice kubepods-burstable-pod1ed9698b_56af_4e7f_90ec_aa46e4b9c7f6.slice - libcontainer container kubepods-burstable-pod1ed9698b_56af_4e7f_90ec_aa46e4b9c7f6.slice. Jan 30 13:56:30.239280 systemd[1]: Created slice kubepods-burstable-pod98eaa407_4d24_4e4e_b6fc_fe8371389f6d.slice - libcontainer container kubepods-burstable-pod98eaa407_4d24_4e4e_b6fc_fe8371389f6d.slice. Jan 30 13:56:30.251918 systemd[1]: Created slice kubepods-besteffort-podbf3a2e24_1024_4d44_97d6_556904b751fc.slice - libcontainer container kubepods-besteffort-podbf3a2e24_1024_4d44_97d6_556904b751fc.slice. Jan 30 13:56:30.265729 systemd[1]: Created slice kubepods-besteffort-pod10fe79e7_f8b8_48d0_9f50_3dcca5453972.slice - libcontainer container kubepods-besteffort-pod10fe79e7_f8b8_48d0_9f50_3dcca5453972.slice. Jan 30 13:56:30.278882 systemd[1]: Created slice kubepods-besteffort-pod0510e406_ed27_4565_a620_76d33cf07b41.slice - libcontainer container kubepods-besteffort-pod0510e406_ed27_4565_a620_76d33cf07b41.slice. 
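[Editor's note] The slice names systemd reports above appear to be the pod UID with dashes mapped to underscores, nested under the pod's QoS class: compare the coredns UID 1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6 in the admit lines with kubepods-burstable-pod1ed9698b_56af_4e7f_90ec_aa46e4b9c7f6.slice. A small sketch of that naming, stated as an observation from this log rather than kubelet source:

    // Observation from this log, not kubelet source: pod cgroup slices are named
    // from the QoS class plus the pod UID with dashes mapped to underscores.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func podSliceName(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	// coredns-7db6d8ff4d-g59jk UID as reported by the topology admit handler above.
    	fmt.Println(podSliceName("burstable", "1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6"))
    	// -> kubepods-burstable-pod1ed9698b_56af_4e7f_90ec_aa46e4b9c7f6.slice
    }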
Jan 30 13:56:30.290961 kubelet[2576]: I0130 13:56:30.290911 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls4lf\" (UniqueName: \"kubernetes.io/projected/98eaa407-4d24-4e4e-b6fc-fe8371389f6d-kube-api-access-ls4lf\") pod \"coredns-7db6d8ff4d-zhk5s\" (UID: \"98eaa407-4d24-4e4e-b6fc-fe8371389f6d\") " pod="kube-system/coredns-7db6d8ff4d-zhk5s" Jan 30 13:56:30.290961 kubelet[2576]: I0130 13:56:30.290961 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaa407-4d24-4e4e-b6fc-fe8371389f6d-config-volume\") pod \"coredns-7db6d8ff4d-zhk5s\" (UID: \"98eaa407-4d24-4e4e-b6fc-fe8371389f6d\") " pod="kube-system/coredns-7db6d8ff4d-zhk5s" Jan 30 13:56:30.290961 kubelet[2576]: I0130 13:56:30.290985 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6t4k\" (UniqueName: \"kubernetes.io/projected/bf3a2e24-1024-4d44-97d6-556904b751fc-kube-api-access-h6t4k\") pod \"calico-kube-controllers-59b5bcffb9-86cwd\" (UID: \"bf3a2e24-1024-4d44-97d6-556904b751fc\") " pod="calico-system/calico-kube-controllers-59b5bcffb9-86cwd" Jan 30 13:56:30.291276 kubelet[2576]: I0130 13:56:30.291042 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf3a2e24-1024-4d44-97d6-556904b751fc-tigera-ca-bundle\") pod \"calico-kube-controllers-59b5bcffb9-86cwd\" (UID: \"bf3a2e24-1024-4d44-97d6-556904b751fc\") " pod="calico-system/calico-kube-controllers-59b5bcffb9-86cwd" Jan 30 13:56:30.291276 kubelet[2576]: I0130 13:56:30.291065 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/10fe79e7-f8b8-48d0-9f50-3dcca5453972-calico-apiserver-certs\") pod \"calico-apiserver-7846bc9c4f-9s5kj\" (UID: \"10fe79e7-f8b8-48d0-9f50-3dcca5453972\") " pod="calico-apiserver/calico-apiserver-7846bc9c4f-9s5kj" Jan 30 13:56:30.291276 kubelet[2576]: I0130 13:56:30.291082 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0510e406-ed27-4565-a620-76d33cf07b41-calico-apiserver-certs\") pod \"calico-apiserver-7846bc9c4f-pphrg\" (UID: \"0510e406-ed27-4565-a620-76d33cf07b41\") " pod="calico-apiserver/calico-apiserver-7846bc9c4f-pphrg" Jan 30 13:56:30.291276 kubelet[2576]: I0130 13:56:30.291099 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6-config-volume\") pod \"coredns-7db6d8ff4d-g59jk\" (UID: \"1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6\") " pod="kube-system/coredns-7db6d8ff4d-g59jk" Jan 30 13:56:30.291276 kubelet[2576]: I0130 13:56:30.291117 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtgc\" (UniqueName: \"kubernetes.io/projected/1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6-kube-api-access-dwtgc\") pod \"coredns-7db6d8ff4d-g59jk\" (UID: \"1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6\") " pod="kube-system/coredns-7db6d8ff4d-g59jk" Jan 30 13:56:30.291453 kubelet[2576]: I0130 13:56:30.291133 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-d29x6\" (UniqueName: \"kubernetes.io/projected/10fe79e7-f8b8-48d0-9f50-3dcca5453972-kube-api-access-d29x6\") pod \"calico-apiserver-7846bc9c4f-9s5kj\" (UID: \"10fe79e7-f8b8-48d0-9f50-3dcca5453972\") " pod="calico-apiserver/calico-apiserver-7846bc9c4f-9s5kj" Jan 30 13:56:30.291453 kubelet[2576]: I0130 13:56:30.291151 2576 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k946n\" (UniqueName: \"kubernetes.io/projected/0510e406-ed27-4565-a620-76d33cf07b41-kube-api-access-k946n\") pod \"calico-apiserver-7846bc9c4f-pphrg\" (UID: \"0510e406-ed27-4565-a620-76d33cf07b41\") " pod="calico-apiserver/calico-apiserver-7846bc9c4f-pphrg" Jan 30 13:56:30.526999 kubelet[2576]: E0130 13:56:30.526556 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:30.527839 containerd[1464]: time="2025-01-30T13:56:30.527646641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g59jk,Uid:1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:30.546906 kubelet[2576]: E0130 13:56:30.546857 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:30.547985 containerd[1464]: time="2025-01-30T13:56:30.547665846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhk5s,Uid:98eaa407-4d24-4e4e-b6fc-fe8371389f6d,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:30.562182 containerd[1464]: time="2025-01-30T13:56:30.561931616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b5bcffb9-86cwd,Uid:bf3a2e24-1024-4d44-97d6-556904b751fc,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:30.573106 containerd[1464]: time="2025-01-30T13:56:30.572555830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7846bc9c4f-9s5kj,Uid:10fe79e7-f8b8-48d0-9f50-3dcca5453972,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:56:30.602291 containerd[1464]: time="2025-01-30T13:56:30.601596992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7846bc9c4f-pphrg,Uid:0510e406-ed27-4565-a620-76d33cf07b41,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:56:30.930894 containerd[1464]: time="2025-01-30T13:56:30.930703873Z" level=error msg="Failed to destroy network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.931376 containerd[1464]: time="2025-01-30T13:56:30.931255700Z" level=error msg="Failed to destroy network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.937529 containerd[1464]: time="2025-01-30T13:56:30.937458516Z" level=error msg="encountered an error cleaning up failed sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.937715 containerd[1464]: time="2025-01-30T13:56:30.937567091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b5bcffb9-86cwd,Uid:bf3a2e24-1024-4d44-97d6-556904b751fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.942595 containerd[1464]: time="2025-01-30T13:56:30.942417106Z" level=error msg="encountered an error cleaning up failed sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.943006 containerd[1464]: time="2025-01-30T13:56:30.942864865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7846bc9c4f-9s5kj,Uid:10fe79e7-f8b8-48d0-9f50-3dcca5453972,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.947341 kubelet[2576]: E0130 13:56:30.947212 2576 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.947341 kubelet[2576]: E0130 13:56:30.947308 2576 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59b5bcffb9-86cwd" Jan 30 13:56:30.947341 kubelet[2576]: E0130 13:56:30.947334 2576 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59b5bcffb9-86cwd" Jan 30 13:56:30.949029 kubelet[2576]: E0130 13:56:30.947381 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59b5bcffb9-86cwd_calico-system(bf3a2e24-1024-4d44-97d6-556904b751fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-59b5bcffb9-86cwd_calico-system(bf3a2e24-1024-4d44-97d6-556904b751fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59b5bcffb9-86cwd" podUID="bf3a2e24-1024-4d44-97d6-556904b751fc" Jan 30 13:56:30.949029 kubelet[2576]: E0130 13:56:30.947765 2576 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.949029 kubelet[2576]: E0130 13:56:30.947822 2576 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7846bc9c4f-9s5kj" Jan 30 13:56:30.950198 kubelet[2576]: E0130 13:56:30.947842 2576 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7846bc9c4f-9s5kj" Jan 30 13:56:30.950198 kubelet[2576]: E0130 13:56:30.948327 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7846bc9c4f-9s5kj_calico-apiserver(10fe79e7-f8b8-48d0-9f50-3dcca5453972)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7846bc9c4f-9s5kj_calico-apiserver(10fe79e7-f8b8-48d0-9f50-3dcca5453972)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7846bc9c4f-9s5kj" podUID="10fe79e7-f8b8-48d0-9f50-3dcca5453972" Jan 30 13:56:30.960237 containerd[1464]: time="2025-01-30T13:56:30.959985587Z" level=error msg="Failed to destroy network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.961195 containerd[1464]: time="2025-01-30T13:56:30.961151142Z" level=error msg="encountered an error cleaning up failed sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.961307 containerd[1464]: time="2025-01-30T13:56:30.961219138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhk5s,Uid:98eaa407-4d24-4e4e-b6fc-fe8371389f6d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.962428 kubelet[2576]: E0130 13:56:30.961525 2576 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.962428 kubelet[2576]: E0130 13:56:30.961594 2576 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zhk5s" Jan 30 13:56:30.962428 kubelet[2576]: E0130 13:56:30.961615 2576 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zhk5s" Jan 30 13:56:30.962594 kubelet[2576]: E0130 13:56:30.961671 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zhk5s_kube-system(98eaa407-4d24-4e4e-b6fc-fe8371389f6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zhk5s_kube-system(98eaa407-4d24-4e4e-b6fc-fe8371389f6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zhk5s" podUID="98eaa407-4d24-4e4e-b6fc-fe8371389f6d" Jan 30 13:56:30.965032 kubelet[2576]: I0130 13:56:30.964883 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:30.971334 kubelet[2576]: I0130 13:56:30.971284 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:30.982438 containerd[1464]: time="2025-01-30T13:56:30.980156913Z" level=info msg="StopPodSandbox for \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\"" Jan 30 13:56:30.982438 
containerd[1464]: time="2025-01-30T13:56:30.981887706Z" level=info msg="StopPodSandbox for \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\"" Jan 30 13:56:30.984196 containerd[1464]: time="2025-01-30T13:56:30.984136619Z" level=info msg="Ensure that sandbox 4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602 in task-service has been cleanup successfully" Jan 30 13:56:30.984474 containerd[1464]: time="2025-01-30T13:56:30.984436623Z" level=info msg="Ensure that sandbox 22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318 in task-service has been cleanup successfully" Jan 30 13:56:30.985905 containerd[1464]: time="2025-01-30T13:56:30.985854807Z" level=error msg="Failed to destroy network for sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.986534 containerd[1464]: time="2025-01-30T13:56:30.986479919Z" level=error msg="encountered an error cleaning up failed sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.986815 containerd[1464]: time="2025-01-30T13:56:30.986780275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g59jk,Uid:1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.987492 kubelet[2576]: E0130 13:56:30.987439 2576 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:30.987613 kubelet[2576]: E0130 13:56:30.987546 2576 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-g59jk" Jan 30 13:56:30.987613 kubelet[2576]: E0130 13:56:30.987591 2576 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-g59jk" Jan 30 13:56:30.990475 kubelet[2576]: E0130 13:56:30.988037 2576 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-g59jk_kube-system(1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-g59jk_kube-system(1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-g59jk" podUID="1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6" Jan 30 13:56:31.013870 kubelet[2576]: E0130 13:56:31.013681 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:31.020448 containerd[1464]: time="2025-01-30T13:56:31.020097812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:56:31.028370 containerd[1464]: time="2025-01-30T13:56:31.028317457Z" level=error msg="Failed to destroy network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.033231 containerd[1464]: time="2025-01-30T13:56:31.033166791Z" level=error msg="encountered an error cleaning up failed sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.033386 containerd[1464]: time="2025-01-30T13:56:31.033250262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7846bc9c4f-pphrg,Uid:0510e406-ed27-4565-a620-76d33cf07b41,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.033791 kubelet[2576]: E0130 13:56:31.033727 2576 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.033974 kubelet[2576]: E0130 13:56:31.033896 2576 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7846bc9c4f-pphrg" Jan 30 13:56:31.033974 kubelet[2576]: E0130 13:56:31.033926 2576 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7846bc9c4f-pphrg" Jan 30 13:56:31.034308 kubelet[2576]: E0130 13:56:31.034174 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7846bc9c4f-pphrg_calico-apiserver(0510e406-ed27-4565-a620-76d33cf07b41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7846bc9c4f-pphrg_calico-apiserver(0510e406-ed27-4565-a620-76d33cf07b41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7846bc9c4f-pphrg" podUID="0510e406-ed27-4565-a620-76d33cf07b41" Jan 30 13:56:31.063320 containerd[1464]: time="2025-01-30T13:56:31.062814206Z" level=error msg="StopPodSandbox for \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\" failed" error="failed to destroy network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.063751 kubelet[2576]: E0130 13:56:31.063057 2576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:31.063751 kubelet[2576]: E0130 13:56:31.063115 2576 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318"} Jan 30 13:56:31.063751 kubelet[2576]: E0130 13:56:31.063175 2576 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf3a2e24-1024-4d44-97d6-556904b751fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:31.063751 kubelet[2576]: E0130 13:56:31.063197 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf3a2e24-1024-4d44-97d6-556904b751fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59b5bcffb9-86cwd" podUID="bf3a2e24-1024-4d44-97d6-556904b751fc" Jan 30 13:56:31.077661 containerd[1464]: time="2025-01-30T13:56:31.077530713Z" level=error msg="StopPodSandbox for \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\" failed" error="failed to destroy network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.078011 kubelet[2576]: E0130 13:56:31.077956 2576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:31.078121 kubelet[2576]: E0130 13:56:31.078020 2576 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602"} Jan 30 13:56:31.078121 kubelet[2576]: E0130 13:56:31.078055 2576 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10fe79e7-f8b8-48d0-9f50-3dcca5453972\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:31.078121 kubelet[2576]: E0130 13:56:31.078098 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10fe79e7-f8b8-48d0-9f50-3dcca5453972\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7846bc9c4f-9s5kj" podUID="10fe79e7-f8b8-48d0-9f50-3dcca5453972" Jan 30 13:56:31.812538 systemd[1]: Created slice kubepods-besteffort-pod1de43d45_e363_48a6_9642_5ad8984fd09e.slice - libcontainer container kubepods-besteffort-pod1de43d45_e363_48a6_9642_5ad8984fd09e.slice. 
Jan 30 13:56:31.815583 containerd[1464]: time="2025-01-30T13:56:31.815536563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjzh2,Uid:1de43d45-e363-48a6-9642-5ad8984fd09e,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:31.908775 containerd[1464]: time="2025-01-30T13:56:31.908719997Z" level=error msg="Failed to destroy network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.909421 containerd[1464]: time="2025-01-30T13:56:31.909250715Z" level=error msg="encountered an error cleaning up failed sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.909421 containerd[1464]: time="2025-01-30T13:56:31.909329953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjzh2,Uid:1de43d45-e363-48a6-9642-5ad8984fd09e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.912554 kubelet[2576]: E0130 13:56:31.911580 2576 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.912554 kubelet[2576]: E0130 13:56:31.911657 2576 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjzh2" Jan 30 13:56:31.912554 kubelet[2576]: E0130 13:56:31.911678 2576 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rjzh2" Jan 30 13:56:31.912225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b-shm.mount: Deactivated successfully. 
Jan 30 13:56:31.912786 kubelet[2576]: E0130 13:56:31.911733 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rjzh2_calico-system(1de43d45-e363-48a6-9642-5ad8984fd09e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rjzh2_calico-system(1de43d45-e363-48a6-9642-5ad8984fd09e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjzh2" podUID="1de43d45-e363-48a6-9642-5ad8984fd09e" Jan 30 13:56:32.017455 kubelet[2576]: I0130 13:56:32.017383 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:32.020227 containerd[1464]: time="2025-01-30T13:56:32.019661779Z" level=info msg="StopPodSandbox for \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\"" Jan 30 13:56:32.021056 containerd[1464]: time="2025-01-30T13:56:32.020830889Z" level=info msg="Ensure that sandbox e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f in task-service has been cleanup successfully" Jan 30 13:56:32.021854 kubelet[2576]: I0130 13:56:32.021595 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:32.023461 containerd[1464]: time="2025-01-30T13:56:32.023040501Z" level=info msg="StopPodSandbox for \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\"" Jan 30 13:56:32.026304 containerd[1464]: time="2025-01-30T13:56:32.024734930Z" level=info msg="Ensure that sandbox df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b in task-service has been cleanup successfully" Jan 30 13:56:32.030105 kubelet[2576]: I0130 13:56:32.029610 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:32.031944 containerd[1464]: time="2025-01-30T13:56:32.031898747Z" level=info msg="StopPodSandbox for \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\"" Jan 30 13:56:32.032772 containerd[1464]: time="2025-01-30T13:56:32.032276186Z" level=info msg="Ensure that sandbox 181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535 in task-service has been cleanup successfully" Jan 30 13:56:32.035190 kubelet[2576]: I0130 13:56:32.035156 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:32.037246 containerd[1464]: time="2025-01-30T13:56:32.037208096Z" level=info msg="StopPodSandbox for \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\"" Jan 30 13:56:32.037456 containerd[1464]: time="2025-01-30T13:56:32.037394194Z" level=info msg="Ensure that sandbox b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae in task-service has been cleanup successfully" Jan 30 13:56:32.113667 containerd[1464]: time="2025-01-30T13:56:32.113595988Z" level=error msg="StopPodSandbox for \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\" failed" error="failed to destroy network for sandbox 
\"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:32.115227 kubelet[2576]: E0130 13:56:32.115165 2576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:32.115483 kubelet[2576]: E0130 13:56:32.115241 2576 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f"} Jan 30 13:56:32.115483 kubelet[2576]: E0130 13:56:32.115278 2576 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:32.115483 kubelet[2576]: E0130 13:56:32.115308 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-g59jk" podUID="1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6" Jan 30 13:56:32.119535 containerd[1464]: time="2025-01-30T13:56:32.119122456Z" level=error msg="StopPodSandbox for \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\" failed" error="failed to destroy network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:32.120004 kubelet[2576]: E0130 13:56:32.119576 2576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:32.120004 kubelet[2576]: E0130 13:56:32.119634 2576 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b"} Jan 30 13:56:32.120004 kubelet[2576]: E0130 13:56:32.119676 2576 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1de43d45-e363-48a6-9642-5ad8984fd09e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:32.120004 kubelet[2576]: E0130 13:56:32.119710 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1de43d45-e363-48a6-9642-5ad8984fd09e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rjzh2" podUID="1de43d45-e363-48a6-9642-5ad8984fd09e" Jan 30 13:56:32.127577 containerd[1464]: time="2025-01-30T13:56:32.127522026Z" level=error msg="StopPodSandbox for \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\" failed" error="failed to destroy network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:32.128175 kubelet[2576]: E0130 13:56:32.127802 2576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:32.128175 kubelet[2576]: E0130 13:56:32.127877 2576 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae"} Jan 30 13:56:32.128175 kubelet[2576]: E0130 13:56:32.127927 2576 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98eaa407-4d24-4e4e-b6fc-fe8371389f6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:32.128175 kubelet[2576]: E0130 13:56:32.127969 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98eaa407-4d24-4e4e-b6fc-fe8371389f6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zhk5s" 
podUID="98eaa407-4d24-4e4e-b6fc-fe8371389f6d" Jan 30 13:56:32.136245 containerd[1464]: time="2025-01-30T13:56:32.136176943Z" level=error msg="StopPodSandbox for \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\" failed" error="failed to destroy network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:32.136779 kubelet[2576]: E0130 13:56:32.136729 2576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:32.136884 kubelet[2576]: E0130 13:56:32.136792 2576 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535"} Jan 30 13:56:32.136884 kubelet[2576]: E0130 13:56:32.136830 2576 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0510e406-ed27-4565-a620-76d33cf07b41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:32.136884 kubelet[2576]: E0130 13:56:32.136855 2576 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0510e406-ed27-4565-a620-76d33cf07b41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7846bc9c4f-pphrg" podUID="0510e406-ed27-4565-a620-76d33cf07b41" Jan 30 13:56:37.223576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount665912722.mount: Deactivated successfully. 
Jan 30 13:56:37.396564 containerd[1464]: time="2025-01-30T13:56:37.394378627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:37.397572 containerd[1464]: time="2025-01-30T13:56:37.391826217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:56:37.401110 containerd[1464]: time="2025-01-30T13:56:37.401066383Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:37.402089 containerd[1464]: time="2025-01-30T13:56:37.402038152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:37.409515 containerd[1464]: time="2025-01-30T13:56:37.409425051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.383047379s" Jan 30 13:56:37.409515 containerd[1464]: time="2025-01-30T13:56:37.409514575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:56:37.566898 containerd[1464]: time="2025-01-30T13:56:37.566693450Z" level=info msg="CreateContainer within sandbox \"61578128f6c2ea819d81853fdd24adac64e8fa9c03f7128b5ab64ac44d6998dc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:56:37.731739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338911934.mount: Deactivated successfully. Jan 30 13:56:37.747808 containerd[1464]: time="2025-01-30T13:56:37.747709415Z" level=info msg="CreateContainer within sandbox \"61578128f6c2ea819d81853fdd24adac64e8fa9c03f7128b5ab64ac44d6998dc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"684b2bf1033d80f4b215300ea3424f41677ce8aa560a4cb4d86d78f5204c11dc\"" Jan 30 13:56:37.750360 containerd[1464]: time="2025-01-30T13:56:37.749083855Z" level=info msg="StartContainer for \"684b2bf1033d80f4b215300ea3424f41677ce8aa560a4cb4d86d78f5204c11dc\"" Jan 30 13:56:37.875918 systemd[1]: Started cri-containerd-684b2bf1033d80f4b215300ea3424f41677ce8aa560a4cb4d86d78f5204c11dc.scope - libcontainer container 684b2bf1033d80f4b215300ea3424f41677ce8aa560a4cb4d86d78f5204c11dc. Jan 30 13:56:37.939036 containerd[1464]: time="2025-01-30T13:56:37.938507006Z" level=info msg="StartContainer for \"684b2bf1033d80f4b215300ea3424f41677ce8aa560a4cb4d86d78f5204c11dc\" returns successfully" Jan 30 13:56:38.046004 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:56:38.046481 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 13:56:38.077529 kubelet[2576]: E0130 13:56:38.076639 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:38.114609 kubelet[2576]: I0130 13:56:38.114345 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-52xh2" podStartSLOduration=2.006715205 podStartE2EDuration="19.114320628s" podCreationTimestamp="2025-01-30 13:56:19 +0000 UTC" firstStartedPulling="2025-01-30 13:56:20.349313905 +0000 UTC m=+22.688225044" lastFinishedPulling="2025-01-30 13:56:37.456919338 +0000 UTC m=+39.795830467" observedRunningTime="2025-01-30 13:56:38.11161668 +0000 UTC m=+40.450527817" watchObservedRunningTime="2025-01-30 13:56:38.114320628 +0000 UTC m=+40.453231780" Jan 30 13:56:39.069458 kubelet[2576]: E0130 13:56:39.067925 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:39.114013 systemd[1]: run-containerd-runc-k8s.io-684b2bf1033d80f4b215300ea3424f41677ce8aa560a4cb4d86d78f5204c11dc-runc.CkkZ1z.mount: Deactivated successfully. Jan 30 13:56:42.666942 kubelet[2576]: I0130 13:56:42.666568 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:56:42.667991 kubelet[2576]: E0130 13:56:42.667847 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:43.088805 kubelet[2576]: E0130 13:56:43.088389 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:43.416592 kernel: bpftool[3924]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:56:43.806116 systemd-networkd[1365]: vxlan.calico: Link UP Jan 30 13:56:43.806126 systemd-networkd[1365]: vxlan.calico: Gained carrier Jan 30 13:56:43.812865 containerd[1464]: time="2025-01-30T13:56:43.812809218Z" level=info msg="StopPodSandbox for \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\"" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:43.989 [INFO][3974] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:43.993 [INFO][3974] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" iface="eth0" netns="/var/run/netns/cni-0312eb5f-001e-cfcf-844d-33169718b70b" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:43.993 [INFO][3974] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" iface="eth0" netns="/var/run/netns/cni-0312eb5f-001e-cfcf-844d-33169718b70b" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:43.994 [INFO][3974] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" iface="eth0" netns="/var/run/netns/cni-0312eb5f-001e-cfcf-844d-33169718b70b" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:43.994 [INFO][3974] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:43.994 [INFO][3974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:44.167 [INFO][3984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" HandleID="k8s-pod-network.df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:44.171 [INFO][3984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:44.171 [INFO][3984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:44.193 [WARNING][3984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" HandleID="k8s-pod-network.df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:44.193 [INFO][3984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" HandleID="k8s-pod-network.df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:44.199 [INFO][3984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:44.210064 containerd[1464]: 2025-01-30 13:56:44.204 [INFO][3974] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:44.214297 systemd[1]: run-netns-cni\x2d0312eb5f\x2d001e\x2dcfcf\x2d844d\x2d33169718b70b.mount: Deactivated successfully. 
Jan 30 13:56:44.229361 containerd[1464]: time="2025-01-30T13:56:44.229291479Z" level=info msg="TearDown network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\" successfully" Jan 30 13:56:44.229654 containerd[1464]: time="2025-01-30T13:56:44.229622023Z" level=info msg="StopPodSandbox for \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\" returns successfully" Jan 30 13:56:44.246301 containerd[1464]: time="2025-01-30T13:56:44.245743562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjzh2,Uid:1de43d45-e363-48a6-9642-5ad8984fd09e,Namespace:calico-system,Attempt:1,}" Jan 30 13:56:44.488524 systemd-networkd[1365]: cali6e1bd6f9e36: Link UP Jan 30 13:56:44.490134 systemd-networkd[1365]: cali6e1bd6f9e36: Gained carrier Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.361 [INFO][4026] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0 csi-node-driver- calico-system 1de43d45-e363-48a6-9642-5ad8984fd09e 821 0 2025-01-30 13:56:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-04505505d0 csi-node-driver-rjzh2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6e1bd6f9e36 [] []}} ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Namespace="calico-system" Pod="csi-node-driver-rjzh2" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.362 [INFO][4026] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Namespace="calico-system" Pod="csi-node-driver-rjzh2" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.416 [INFO][4035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" HandleID="k8s-pod-network.503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.429 [INFO][4035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" HandleID="k8s-pod-network.503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002917d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-04505505d0", "pod":"csi-node-driver-rjzh2", "timestamp":"2025-01-30 13:56:44.416777296 +0000 UTC"}, Hostname:"ci-4081.3.0-a-04505505d0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.429 [INFO][4035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.429 [INFO][4035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.429 [INFO][4035] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-04505505d0' Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.432 [INFO][4035] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.443 [INFO][4035] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.450 [INFO][4035] ipam/ipam.go 489: Trying affinity for 192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.453 [INFO][4035] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.457 [INFO][4035] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.457 [INFO][4035] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.459 [INFO][4035] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.466 [INFO][4035] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.478 [INFO][4035] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.1/26] block=192.168.52.0/26 handle="k8s-pod-network.503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.478 [INFO][4035] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.1/26] handle="k8s-pod-network.503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.478 [INFO][4035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:44.510024 containerd[1464]: 2025-01-30 13:56:44.478 [INFO][4035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.1/26] IPv6=[] ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" HandleID="k8s-pod-network.503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.513303 containerd[1464]: 2025-01-30 13:56:44.484 [INFO][4026] cni-plugin/k8s.go 386: Populated endpoint ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Namespace="calico-system" Pod="csi-node-driver-rjzh2" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1de43d45-e363-48a6-9642-5ad8984fd09e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"", Pod:"csi-node-driver-rjzh2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e1bd6f9e36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:44.513303 containerd[1464]: 2025-01-30 13:56:44.484 [INFO][4026] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.1/32] ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Namespace="calico-system" Pod="csi-node-driver-rjzh2" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.513303 containerd[1464]: 2025-01-30 13:56:44.484 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e1bd6f9e36 ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Namespace="calico-system" Pod="csi-node-driver-rjzh2" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.513303 containerd[1464]: 2025-01-30 13:56:44.490 [INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Namespace="calico-system" Pod="csi-node-driver-rjzh2" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.513303 containerd[1464]: 2025-01-30 13:56:44.491 [INFO][4026] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" 
Namespace="calico-system" Pod="csi-node-driver-rjzh2" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1de43d45-e363-48a6-9642-5ad8984fd09e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b", Pod:"csi-node-driver-rjzh2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e1bd6f9e36", MAC:"2e:45:86:b2:a3:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:44.513303 containerd[1464]: 2025-01-30 13:56:44.507 [INFO][4026] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b" Namespace="calico-system" Pod="csi-node-driver-rjzh2" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:44.546198 containerd[1464]: time="2025-01-30T13:56:44.545334088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:44.546462 containerd[1464]: time="2025-01-30T13:56:44.546205587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:44.546462 containerd[1464]: time="2025-01-30T13:56:44.546235970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:44.546835 containerd[1464]: time="2025-01-30T13:56:44.546456970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:44.578868 systemd[1]: Started cri-containerd-503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b.scope - libcontainer container 503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b. 
Jan 30 13:56:44.616206 containerd[1464]: time="2025-01-30T13:56:44.616074854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rjzh2,Uid:1de43d45-e363-48a6-9642-5ad8984fd09e,Namespace:calico-system,Attempt:1,} returns sandbox id \"503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b\"" Jan 30 13:56:44.652345 containerd[1464]: time="2025-01-30T13:56:44.651999474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:56:44.805027 containerd[1464]: time="2025-01-30T13:56:44.804823707Z" level=info msg="StopPodSandbox for \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\"" Jan 30 13:56:44.805770 containerd[1464]: time="2025-01-30T13:56:44.804826491Z" level=info msg="StopPodSandbox for \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\"" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.914 [INFO][4124] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.914 [INFO][4124] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" iface="eth0" netns="/var/run/netns/cni-06e88bde-4d2c-d97d-d97c-cc538550f588" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.914 [INFO][4124] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" iface="eth0" netns="/var/run/netns/cni-06e88bde-4d2c-d97d-d97c-cc538550f588" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.914 [INFO][4124] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" iface="eth0" netns="/var/run/netns/cni-06e88bde-4d2c-d97d-d97c-cc538550f588" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.914 [INFO][4124] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.915 [INFO][4124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.966 [INFO][4140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" HandleID="k8s-pod-network.4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.967 [INFO][4140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.967 [INFO][4140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.975 [WARNING][4140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" HandleID="k8s-pod-network.4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.975 [INFO][4140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" HandleID="k8s-pod-network.4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.978 [INFO][4140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:44.986127 containerd[1464]: 2025-01-30 13:56:44.981 [INFO][4124] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:44.988355 containerd[1464]: time="2025-01-30T13:56:44.986297082Z" level=info msg="TearDown network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\" successfully" Jan 30 13:56:44.988355 containerd[1464]: time="2025-01-30T13:56:44.986339018Z" level=info msg="StopPodSandbox for \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\" returns successfully" Jan 30 13:56:44.989157 containerd[1464]: time="2025-01-30T13:56:44.989128098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7846bc9c4f-9s5kj,Uid:10fe79e7-f8b8-48d0-9f50-3dcca5453972,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.901 [INFO][4123] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.902 [INFO][4123] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" iface="eth0" netns="/var/run/netns/cni-f5fdbdcf-2564-2e06-965b-8e9ed5451d58" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.902 [INFO][4123] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" iface="eth0" netns="/var/run/netns/cni-f5fdbdcf-2564-2e06-965b-8e9ed5451d58" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.904 [INFO][4123] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" iface="eth0" netns="/var/run/netns/cni-f5fdbdcf-2564-2e06-965b-8e9ed5451d58" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.904 [INFO][4123] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.904 [INFO][4123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.969 [INFO][4136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" HandleID="k8s-pod-network.181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.969 [INFO][4136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.978 [INFO][4136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.987 [WARNING][4136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" HandleID="k8s-pod-network.181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.987 [INFO][4136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" HandleID="k8s-pod-network.181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.993 [INFO][4136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:45.005044 containerd[1464]: 2025-01-30 13:56:44.998 [INFO][4123] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:45.005044 containerd[1464]: time="2025-01-30T13:56:45.004212920Z" level=info msg="TearDown network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\" successfully" Jan 30 13:56:45.005044 containerd[1464]: time="2025-01-30T13:56:45.004269824Z" level=info msg="StopPodSandbox for \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\" returns successfully" Jan 30 13:56:45.006121 containerd[1464]: time="2025-01-30T13:56:45.005954322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7846bc9c4f-pphrg,Uid:0510e406-ed27-4565-a620-76d33cf07b41,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:56:45.221992 systemd[1]: run-containerd-runc-k8s.io-503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b-runc.VlLWEH.mount: Deactivated successfully. Jan 30 13:56:45.222638 systemd[1]: run-netns-cni\x2df5fdbdcf\x2d2564\x2d2e06\x2d965b\x2d8e9ed5451d58.mount: Deactivated successfully. Jan 30 13:56:45.222731 systemd[1]: run-netns-cni\x2d06e88bde\x2d4d2c\x2dd97d\x2dd97c\x2dcc538550f588.mount: Deactivated successfully. 
Jan 30 13:56:45.247988 systemd-networkd[1365]: cali42520583144: Link UP Jan 30 13:56:45.248384 systemd-networkd[1365]: cali42520583144: Gained carrier Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.089 [INFO][4158] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0 calico-apiserver-7846bc9c4f- calico-apiserver 0510e406-ed27-4565-a620-76d33cf07b41 829 0 2025-01-30 13:56:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7846bc9c4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-04505505d0 calico-apiserver-7846bc9c4f-pphrg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali42520583144 [] []}} ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-pphrg" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.090 [INFO][4158] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-pphrg" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.142 [INFO][4173] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" HandleID="k8s-pod-network.f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.160 [INFO][4173] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" HandleID="k8s-pod-network.f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000513c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-04505505d0", "pod":"calico-apiserver-7846bc9c4f-pphrg", "timestamp":"2025-01-30 13:56:45.142704644 +0000 UTC"}, Hostname:"ci-4081.3.0-a-04505505d0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.162 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.162 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.162 [INFO][4173] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-04505505d0' Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.165 [INFO][4173] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.174 [INFO][4173] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.180 [INFO][4173] ipam/ipam.go 489: Trying affinity for 192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.183 [INFO][4173] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.187 [INFO][4173] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.188 [INFO][4173] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.190 [INFO][4173] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2 Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.197 [INFO][4173] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.205 [INFO][4173] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.2/26] block=192.168.52.0/26 handle="k8s-pod-network.f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.205 [INFO][4173] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.2/26] handle="k8s-pod-network.f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.205 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:45.276933 containerd[1464]: 2025-01-30 13:56:45.205 [INFO][4173] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.2/26] IPv6=[] ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" HandleID="k8s-pod-network.f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.278815 containerd[1464]: 2025-01-30 13:56:45.210 [INFO][4158] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-pphrg" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0", GenerateName:"calico-apiserver-7846bc9c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0510e406-ed27-4565-a620-76d33cf07b41", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7846bc9c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"", Pod:"calico-apiserver-7846bc9c4f-pphrg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42520583144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:45.278815 containerd[1464]: 2025-01-30 13:56:45.210 [INFO][4158] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.2/32] ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-pphrg" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.278815 containerd[1464]: 2025-01-30 13:56:45.210 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42520583144 ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-pphrg" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.278815 containerd[1464]: 2025-01-30 13:56:45.251 [INFO][4158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-pphrg" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.278815 containerd[1464]: 2025-01-30 13:56:45.252 [INFO][4158] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-pphrg" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0", GenerateName:"calico-apiserver-7846bc9c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0510e406-ed27-4565-a620-76d33cf07b41", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7846bc9c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2", Pod:"calico-apiserver-7846bc9c4f-pphrg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42520583144", MAC:"ae:80:ac:c3:0e:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:45.278815 containerd[1464]: 2025-01-30 13:56:45.271 [INFO][4158] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-pphrg" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:45.305735 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Jan 30 13:56:45.307167 systemd-networkd[1365]: caliabfcc8296f4: Link UP Jan 30 13:56:45.308614 systemd-networkd[1365]: caliabfcc8296f4: Gained carrier Jan 30 13:56:45.330565 containerd[1464]: time="2025-01-30T13:56:45.329256427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:45.330565 containerd[1464]: time="2025-01-30T13:56:45.329338177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:45.330565 containerd[1464]: time="2025-01-30T13:56:45.329354304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:45.330565 containerd[1464]: time="2025-01-30T13:56:45.329521571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.095 [INFO][4149] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0 calico-apiserver-7846bc9c4f- calico-apiserver 10fe79e7-f8b8-48d0-9f50-3dcca5453972 830 0 2025-01-30 13:56:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7846bc9c4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-04505505d0 calico-apiserver-7846bc9c4f-9s5kj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliabfcc8296f4 [] []}} ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-9s5kj" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.096 [INFO][4149] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-9s5kj" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.153 [INFO][4177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" HandleID="k8s-pod-network.38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.170 [INFO][4177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" HandleID="k8s-pod-network.38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-04505505d0", "pod":"calico-apiserver-7846bc9c4f-9s5kj", "timestamp":"2025-01-30 13:56:45.153799733 +0000 UTC"}, Hostname:"ci-4081.3.0-a-04505505d0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.170 [INFO][4177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.205 [INFO][4177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.206 [INFO][4177] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-04505505d0' Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.211 [INFO][4177] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.247 [INFO][4177] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.260 [INFO][4177] ipam/ipam.go 489: Trying affinity for 192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.262 [INFO][4177] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.269 [INFO][4177] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.269 [INFO][4177] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.272 [INFO][4177] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047 Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.280 [INFO][4177] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.292 [INFO][4177] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.3/26] block=192.168.52.0/26 handle="k8s-pod-network.38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.292 [INFO][4177] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.3/26] handle="k8s-pod-network.38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.292 [INFO][4177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:45.369701 containerd[1464]: 2025-01-30 13:56:45.292 [INFO][4177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.3/26] IPv6=[] ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" HandleID="k8s-pod-network.38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:45.372232 containerd[1464]: 2025-01-30 13:56:45.296 [INFO][4149] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-9s5kj" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0", GenerateName:"calico-apiserver-7846bc9c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"10fe79e7-f8b8-48d0-9f50-3dcca5453972", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7846bc9c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"", Pod:"calico-apiserver-7846bc9c4f-9s5kj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabfcc8296f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:45.372232 containerd[1464]: 2025-01-30 13:56:45.296 [INFO][4149] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.3/32] ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-9s5kj" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:45.372232 containerd[1464]: 2025-01-30 13:56:45.296 [INFO][4149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabfcc8296f4 ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-9s5kj" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:45.372232 containerd[1464]: 2025-01-30 13:56:45.312 [INFO][4149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-9s5kj" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:45.372232 containerd[1464]: 2025-01-30 13:56:45.324 [INFO][4149] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-9s5kj" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0", GenerateName:"calico-apiserver-7846bc9c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"10fe79e7-f8b8-48d0-9f50-3dcca5453972", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7846bc9c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047", Pod:"calico-apiserver-7846bc9c4f-9s5kj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabfcc8296f4", MAC:"46:34:f6:1c:84:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:45.372232 containerd[1464]: 2025-01-30 13:56:45.348 [INFO][4149] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047" Namespace="calico-apiserver" Pod="calico-apiserver-7846bc9c4f-9s5kj" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:45.373378 systemd[1]: run-containerd-runc-k8s.io-f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2-runc.VLbW0q.mount: Deactivated successfully. Jan 30 13:56:45.385889 systemd[1]: Started cri-containerd-f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2.scope - libcontainer container f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2. Jan 30 13:56:45.423172 containerd[1464]: time="2025-01-30T13:56:45.420196104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:45.423172 containerd[1464]: time="2025-01-30T13:56:45.420271427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:45.423172 containerd[1464]: time="2025-01-30T13:56:45.420284518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:45.423172 containerd[1464]: time="2025-01-30T13:56:45.420384977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:45.463992 systemd[1]: Started cri-containerd-38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047.scope - libcontainer container 38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047. Jan 30 13:56:45.514650 containerd[1464]: time="2025-01-30T13:56:45.514497159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7846bc9c4f-pphrg,Uid:0510e406-ed27-4565-a620-76d33cf07b41,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2\"" Jan 30 13:56:45.559697 containerd[1464]: time="2025-01-30T13:56:45.559635598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7846bc9c4f-9s5kj,Uid:10fe79e7-f8b8-48d0-9f50-3dcca5453972,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047\"" Jan 30 13:56:45.807266 containerd[1464]: time="2025-01-30T13:56:45.807115777Z" level=info msg="StopPodSandbox for \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\"" Jan 30 13:56:45.809321 containerd[1464]: time="2025-01-30T13:56:45.808746411Z" level=info msg="StopPodSandbox for \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\"" Jan 30 13:56:45.945878 systemd-networkd[1365]: cali6e1bd6f9e36: Gained IPv6LL Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.904 [INFO][4318] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.905 [INFO][4318] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" iface="eth0" netns="/var/run/netns/cni-d8a32282-66bd-fae0-16e9-a598ae8ac9d5" Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.906 [INFO][4318] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" iface="eth0" netns="/var/run/netns/cni-d8a32282-66bd-fae0-16e9-a598ae8ac9d5" Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.907 [INFO][4318] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" iface="eth0" netns="/var/run/netns/cni-d8a32282-66bd-fae0-16e9-a598ae8ac9d5" Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.908 [INFO][4318] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.908 [INFO][4318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.961 [INFO][4334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" HandleID="k8s-pod-network.b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.962 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.962 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.974 [WARNING][4334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" HandleID="k8s-pod-network.b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.974 [INFO][4334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" HandleID="k8s-pod-network.b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.977 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:45.986046 containerd[1464]: 2025-01-30 13:56:45.981 [INFO][4318] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:45.986046 containerd[1464]: time="2025-01-30T13:56:45.985980221Z" level=info msg="TearDown network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\" successfully" Jan 30 13:56:45.986046 containerd[1464]: time="2025-01-30T13:56:45.986007115Z" level=info msg="StopPodSandbox for \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\" returns successfully" Jan 30 13:56:45.987239 kubelet[2576]: E0130 13:56:45.986457 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:45.988987 containerd[1464]: time="2025-01-30T13:56:45.987995047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhk5s,Uid:98eaa407-4d24-4e4e-b6fc-fe8371389f6d,Namespace:kube-system,Attempt:1,}" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:45.921 [INFO][4322] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:45.922 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" iface="eth0" netns="/var/run/netns/cni-6b36f681-397f-7f0f-fce4-bbe0a527d0cc" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:45.923 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" iface="eth0" netns="/var/run/netns/cni-6b36f681-397f-7f0f-fce4-bbe0a527d0cc" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:45.923 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" iface="eth0" netns="/var/run/netns/cni-6b36f681-397f-7f0f-fce4-bbe0a527d0cc" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:45.923 [INFO][4322] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:45.923 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:45.989 [INFO][4337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" HandleID="k8s-pod-network.22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:45.989 [INFO][4337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:45.989 [INFO][4337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:46.001 [WARNING][4337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" HandleID="k8s-pod-network.22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:46.002 [INFO][4337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" HandleID="k8s-pod-network.22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:46.006 [INFO][4337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:46.014048 containerd[1464]: 2025-01-30 13:56:46.008 [INFO][4322] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:46.015205 containerd[1464]: time="2025-01-30T13:56:46.014244249Z" level=info msg="TearDown network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\" successfully" Jan 30 13:56:46.015205 containerd[1464]: time="2025-01-30T13:56:46.014283270Z" level=info msg="StopPodSandbox for \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\" returns successfully" Jan 30 13:56:46.016384 containerd[1464]: time="2025-01-30T13:56:46.016349386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b5bcffb9-86cwd,Uid:bf3a2e24-1024-4d44-97d6-556904b751fc,Namespace:calico-system,Attempt:1,}" Jan 30 13:56:46.223751 systemd[1]: run-netns-cni\x2d6b36f681\x2d397f\x2d7f0f\x2dfce4\x2dbbe0a527d0cc.mount: Deactivated successfully. Jan 30 13:56:46.223872 systemd[1]: run-netns-cni\x2dd8a32282\x2d66bd\x2dfae0\x2d16e9\x2da598ae8ac9d5.mount: Deactivated successfully. 
Jan 30 13:56:46.320038 containerd[1464]: time="2025-01-30T13:56:46.319382639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:46.320676 containerd[1464]: time="2025-01-30T13:56:46.320539996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:56:46.322344 containerd[1464]: time="2025-01-30T13:56:46.322290849Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:46.325787 containerd[1464]: time="2025-01-30T13:56:46.325515377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:46.327476 containerd[1464]: time="2025-01-30T13:56:46.327292984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.675237341s" Jan 30 13:56:46.327476 containerd[1464]: time="2025-01-30T13:56:46.327336313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:56:46.332611 containerd[1464]: time="2025-01-30T13:56:46.332535001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:56:46.334794 containerd[1464]: time="2025-01-30T13:56:46.334508830Z" level=info msg="CreateContainer within sandbox \"503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:56:46.364977 systemd-networkd[1365]: calibb48a07b4ce: Link UP Jan 30 13:56:46.368136 systemd-networkd[1365]: calibb48a07b4ce: Gained carrier Jan 30 13:56:46.385318 containerd[1464]: time="2025-01-30T13:56:46.384613270Z" level=info msg="CreateContainer within sandbox \"503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4d2861d3bc9a3f6f020616a6a13b972b17a9560a9e210959eba5ab4e45640414\"" Jan 30 13:56:46.393381 containerd[1464]: time="2025-01-30T13:56:46.392795030Z" level=info msg="StartContainer for \"4d2861d3bc9a3f6f020616a6a13b972b17a9560a9e210959eba5ab4e45640414\"" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.152 [INFO][4351] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0 coredns-7db6d8ff4d- kube-system 98eaa407-4d24-4e4e-b6fc-fe8371389f6d 843 0 2025-01-30 13:56:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-04505505d0 coredns-7db6d8ff4d-zhk5s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibb48a07b4ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-zhk5s" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.153 [INFO][4351] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhk5s" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.263 [INFO][4377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" HandleID="k8s-pod-network.1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.280 [INFO][4377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" HandleID="k8s-pod-network.1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034f100), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-04505505d0", "pod":"coredns-7db6d8ff4d-zhk5s", "timestamp":"2025-01-30 13:56:46.263297554 +0000 UTC"}, Hostname:"ci-4081.3.0-a-04505505d0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.281 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.281 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.281 [INFO][4377] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-04505505d0' Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.285 [INFO][4377] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.293 [INFO][4377] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.311 [INFO][4377] ipam/ipam.go 489: Trying affinity for 192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.315 [INFO][4377] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.320 [INFO][4377] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.320 [INFO][4377] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.324 [INFO][4377] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.336 [INFO][4377] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.350 [INFO][4377] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.4/26] block=192.168.52.0/26 handle="k8s-pod-network.1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.351 [INFO][4377] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.4/26] handle="k8s-pod-network.1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.351 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:46.411606 containerd[1464]: 2025-01-30 13:56:46.351 [INFO][4377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.4/26] IPv6=[] ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" HandleID="k8s-pod-network.1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:46.413681 containerd[1464]: 2025-01-30 13:56:46.353 [INFO][4351] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhk5s" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"98eaa407-4d24-4e4e-b6fc-fe8371389f6d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"", Pod:"coredns-7db6d8ff4d-zhk5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb48a07b4ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:46.413681 containerd[1464]: 2025-01-30 13:56:46.354 [INFO][4351] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.4/32] ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhk5s" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:46.413681 containerd[1464]: 2025-01-30 13:56:46.354 [INFO][4351] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb48a07b4ce ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhk5s" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:46.413681 containerd[1464]: 2025-01-30 13:56:46.367 [INFO][4351] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhk5s" 
WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:46.413681 containerd[1464]: 2025-01-30 13:56:46.369 [INFO][4351] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhk5s" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"98eaa407-4d24-4e4e-b6fc-fe8371389f6d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da", Pod:"coredns-7db6d8ff4d-zhk5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb48a07b4ce", MAC:"36:bd:75:e4:fa:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:46.413681 containerd[1464]: 2025-01-30 13:56:46.400 [INFO][4351] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zhk5s" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:46.484208 systemd-networkd[1365]: calid6d638bd5b7: Link UP Jan 30 13:56:46.489969 systemd-networkd[1365]: calid6d638bd5b7: Gained carrier Jan 30 13:56:46.526206 systemd[1]: Started cri-containerd-4d2861d3bc9a3f6f020616a6a13b972b17a9560a9e210959eba5ab4e45640414.scope - libcontainer container 4d2861d3bc9a3f6f020616a6a13b972b17a9560a9e210959eba5ab4e45640414. 
Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.156 [INFO][4363] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0 calico-kube-controllers-59b5bcffb9- calico-system bf3a2e24-1024-4d44-97d6-556904b751fc 844 0 2025-01-30 13:56:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59b5bcffb9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-04505505d0 calico-kube-controllers-59b5bcffb9-86cwd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid6d638bd5b7 [] []}} ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Namespace="calico-system" Pod="calico-kube-controllers-59b5bcffb9-86cwd" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.157 [INFO][4363] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Namespace="calico-system" Pod="calico-kube-controllers-59b5bcffb9-86cwd" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.292 [INFO][4378] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" HandleID="k8s-pod-network.a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.312 [INFO][4378] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" HandleID="k8s-pod-network.a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034f860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-04505505d0", "pod":"calico-kube-controllers-59b5bcffb9-86cwd", "timestamp":"2025-01-30 13:56:46.292026999 +0000 UTC"}, Hostname:"ci-4081.3.0-a-04505505d0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.312 [INFO][4378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.351 [INFO][4378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.351 [INFO][4378] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-04505505d0' Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.360 [INFO][4378] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.394 [INFO][4378] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.403 [INFO][4378] ipam/ipam.go 489: Trying affinity for 192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.412 [INFO][4378] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.423 [INFO][4378] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.424 [INFO][4378] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.432 [INFO][4378] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3 Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.451 [INFO][4378] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.472 [INFO][4378] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.5/26] block=192.168.52.0/26 handle="k8s-pod-network.a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.472 [INFO][4378] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.5/26] handle="k8s-pod-network.a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.472 [INFO][4378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:46.544215 containerd[1464]: 2025-01-30 13:56:46.472 [INFO][4378] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.5/26] IPv6=[] ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" HandleID="k8s-pod-network.a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.546805 containerd[1464]: 2025-01-30 13:56:46.476 [INFO][4363] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Namespace="calico-system" Pod="calico-kube-controllers-59b5bcffb9-86cwd" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0", GenerateName:"calico-kube-controllers-59b5bcffb9-", Namespace:"calico-system", SelfLink:"", UID:"bf3a2e24-1024-4d44-97d6-556904b751fc", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b5bcffb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"", Pod:"calico-kube-controllers-59b5bcffb9-86cwd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6d638bd5b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:46.546805 containerd[1464]: 2025-01-30 13:56:46.477 [INFO][4363] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.5/32] ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Namespace="calico-system" Pod="calico-kube-controllers-59b5bcffb9-86cwd" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.546805 containerd[1464]: 2025-01-30 13:56:46.477 [INFO][4363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6d638bd5b7 ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Namespace="calico-system" Pod="calico-kube-controllers-59b5bcffb9-86cwd" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.546805 containerd[1464]: 2025-01-30 13:56:46.493 [INFO][4363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Namespace="calico-system" Pod="calico-kube-controllers-59b5bcffb9-86cwd" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.546805 
containerd[1464]: 2025-01-30 13:56:46.496 [INFO][4363] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Namespace="calico-system" Pod="calico-kube-controllers-59b5bcffb9-86cwd" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0", GenerateName:"calico-kube-controllers-59b5bcffb9-", Namespace:"calico-system", SelfLink:"", UID:"bf3a2e24-1024-4d44-97d6-556904b751fc", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b5bcffb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3", Pod:"calico-kube-controllers-59b5bcffb9-86cwd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6d638bd5b7", MAC:"4a:da:44:ea:4d:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:46.546805 containerd[1464]: 2025-01-30 13:56:46.527 [INFO][4363] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3" Namespace="calico-system" Pod="calico-kube-controllers-59b5bcffb9-86cwd" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:46.573315 containerd[1464]: time="2025-01-30T13:56:46.573208288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:46.573503 containerd[1464]: time="2025-01-30T13:56:46.573289589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:46.573503 containerd[1464]: time="2025-01-30T13:56:46.573306195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:46.573503 containerd[1464]: time="2025-01-30T13:56:46.573432611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:46.635619 systemd[1]: Started cri-containerd-1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da.scope - libcontainer container 1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da. 
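Two details stand out in the records above. First, the WorkloadEndpoint is written back with IPNetworks of 192.168.52.5/32 and the freshly assigned MAC 4a:da:44:ea:4d:aa: the address was claimed from a /26 block, but the endpoint itself is stored as a single-host /32 behind the calid6d638bd5b7 interface, which is how Calico typically routes pod traffic. Second, every sandbox containerd starts appears in systemd as a transient scope named cri-containerd-<container-id>.scope, visible in the "Started cri-containerd-..." line. A trivial Go sketch of that naming convention, assuming the unit name really is just the prefix plus the full container ID:

package main

import "fmt"

// scopeUnit reproduces the transient unit name pattern seen in the
// "Started cri-containerd-....scope" journal lines (assumed convention).
func scopeUnit(containerID string) string {
	return fmt.Sprintf("cri-containerd-%s.scope", containerID)
}

func main() {
	id := "1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da"
	fmt.Println(scopeUnit(id)) // matches the systemd line in the log above
}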
Jan 30 13:56:46.645460 containerd[1464]: time="2025-01-30T13:56:46.645355348Z" level=info msg="StartContainer for \"4d2861d3bc9a3f6f020616a6a13b972b17a9560a9e210959eba5ab4e45640414\" returns successfully" Jan 30 13:56:46.655313 containerd[1464]: time="2025-01-30T13:56:46.655008412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:46.655313 containerd[1464]: time="2025-01-30T13:56:46.655112167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:46.655313 containerd[1464]: time="2025-01-30T13:56:46.655136213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:46.658703 containerd[1464]: time="2025-01-30T13:56:46.657243692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:46.714304 systemd-networkd[1365]: caliabfcc8296f4: Gained IPv6LL Jan 30 13:56:46.747734 systemd[1]: Started cri-containerd-a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3.scope - libcontainer container a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3. Jan 30 13:56:46.758624 containerd[1464]: time="2025-01-30T13:56:46.758335393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhk5s,Uid:98eaa407-4d24-4e4e-b6fc-fe8371389f6d,Namespace:kube-system,Attempt:1,} returns sandbox id \"1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da\"" Jan 30 13:56:46.759517 kubelet[2576]: E0130 13:56:46.759483 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:46.798086 containerd[1464]: time="2025-01-30T13:56:46.797733730Z" level=info msg="CreateContainer within sandbox \"1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:56:46.811414 containerd[1464]: time="2025-01-30T13:56:46.811064753Z" level=info msg="StopPodSandbox for \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\"" Jan 30 13:56:46.837761 containerd[1464]: time="2025-01-30T13:56:46.837600208Z" level=info msg="CreateContainer within sandbox \"1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a19fee0a9d5c7878bc92c5c6618bd45c90b906cd7c9a8e5e75a977abe24f797\"" Jan 30 13:56:46.839167 containerd[1464]: time="2025-01-30T13:56:46.839115561Z" level=info msg="StartContainer for \"0a19fee0a9d5c7878bc92c5c6618bd45c90b906cd7c9a8e5e75a977abe24f797\"" Jan 30 13:56:46.900731 containerd[1464]: time="2025-01-30T13:56:46.900690983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b5bcffb9-86cwd,Uid:bf3a2e24-1024-4d44-97d6-556904b751fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3\"" Jan 30 13:56:46.930218 systemd[1]: Started cri-containerd-0a19fee0a9d5c7878bc92c5c6618bd45c90b906cd7c9a8e5e75a977abe24f797.scope - libcontainer container 0a19fee0a9d5c7878bc92c5c6618bd45c90b906cd7c9a8e5e75a977abe24f797. 
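The kubelet dns.go warning repeated throughout this section ("Nameserver limits exceeded ... 67.207.67.2 67.207.67.3 67.207.67.2") means kubelet trimmed the resolver list it injects into pod resolv.conf files; upstream kubelet keeps at most three nameserver entries, and the applied line here even carries 67.207.67.2 twice, so the node's own resolv.conf presumably lists a duplicate on top of exceeding the cap. A hedged Go sketch of that trimming; the limit of 3 is taken from the upstream default rather than from this log, and the fourth server below is hypothetical:

package main

import "fmt"

// capNameservers keeps only the first limit entries, mimicking the behaviour
// behind kubelet's "Nameserver limits exceeded" warning (illustration only).
func capNameservers(servers []string, limit int) []string {
	if len(servers) <= limit {
		return servers
	}
	return servers[:limit]
}

func main() {
	// Hypothetical node resolv.conf; only the applied line
	// "67.207.67.2 67.207.67.3 67.207.67.2" is actually visible in the log.
	servers := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "1.1.1.1"}
	fmt.Println(capNameservers(servers, 3))
}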
Jan 30 13:56:47.028671 containerd[1464]: time="2025-01-30T13:56:47.027201486Z" level=info msg="StartContainer for \"0a19fee0a9d5c7878bc92c5c6618bd45c90b906cd7c9a8e5e75a977abe24f797\" returns successfully" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:46.964 [INFO][4539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:46.964 [INFO][4539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" iface="eth0" netns="/var/run/netns/cni-6126b58d-32ce-b5c8-3c63-42c10ce17ddd" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:46.965 [INFO][4539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" iface="eth0" netns="/var/run/netns/cni-6126b58d-32ce-b5c8-3c63-42c10ce17ddd" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:46.965 [INFO][4539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" iface="eth0" netns="/var/run/netns/cni-6126b58d-32ce-b5c8-3c63-42c10ce17ddd" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:46.966 [INFO][4539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:46.966 [INFO][4539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:47.034 [INFO][4575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" HandleID="k8s-pod-network.e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:47.035 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:47.035 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:47.049 [WARNING][4575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" HandleID="k8s-pod-network.e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:47.049 [INFO][4575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" HandleID="k8s-pod-network.e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:47.052 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:47.059290 containerd[1464]: 2025-01-30 13:56:47.056 [INFO][4539] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:47.060214 containerd[1464]: time="2025-01-30T13:56:47.059615034Z" level=info msg="TearDown network for sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\" successfully" Jan 30 13:56:47.060214 containerd[1464]: time="2025-01-30T13:56:47.059654603Z" level=info msg="StopPodSandbox for \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\" returns successfully" Jan 30 13:56:47.060805 kubelet[2576]: E0130 13:56:47.060596 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:47.062250 containerd[1464]: time="2025-01-30T13:56:47.061458173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g59jk,Uid:1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6,Namespace:kube-system,Attempt:1,}" Jan 30 13:56:47.161551 kubelet[2576]: E0130 13:56:47.158290 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:47.225678 systemd-networkd[1365]: cali42520583144: Gained IPv6LL Jan 30 13:56:47.228636 systemd[1]: run-netns-cni\x2d6126b58d\x2d32ce\x2db5c8\x2d3c63\x2d42c10ce17ddd.mount: Deactivated successfully. Jan 30 13:56:47.314188 systemd-networkd[1365]: cali49be0c20575: Link UP Jan 30 13:56:47.316197 systemd-networkd[1365]: cali49be0c20575: Gained carrier Jan 30 13:56:47.336010 kubelet[2576]: I0130 13:56:47.335929 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zhk5s" podStartSLOduration=34.335909368 podStartE2EDuration="34.335909368s" podCreationTimestamp="2025-01-30 13:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:47.180798085 +0000 UTC m=+49.519709222" watchObservedRunningTime="2025-01-30 13:56:47.335909368 +0000 UTC m=+49.674820500" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.157 [INFO][4591] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0 coredns-7db6d8ff4d- kube-system 1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6 864 0 2025-01-30 13:56:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-04505505d0 coredns-7db6d8ff4d-g59jk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali49be0c20575 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g59jk" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.157 [INFO][4591] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g59jk" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.241 [INFO][4604] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" HandleID="k8s-pod-network.66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.262 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" HandleID="k8s-pod-network.66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048c050), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-04505505d0", "pod":"coredns-7db6d8ff4d-g59jk", "timestamp":"2025-01-30 13:56:47.240989582 +0000 UTC"}, Hostname:"ci-4081.3.0-a-04505505d0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.262 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.263 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.263 [INFO][4604] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-04505505d0' Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.267 [INFO][4604] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.274 [INFO][4604] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.281 [INFO][4604] ipam/ipam.go 489: Trying affinity for 192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.284 [INFO][4604] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.288 [INFO][4604] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.289 [INFO][4604] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.291 [INFO][4604] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949 Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.296 [INFO][4604] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.306 [INFO][4604] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.6/26] block=192.168.52.0/26 
handle="k8s-pod-network.66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.306 [INFO][4604] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.6/26] handle="k8s-pod-network.66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" host="ci-4081.3.0-a-04505505d0" Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.306 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:47.343784 containerd[1464]: 2025-01-30 13:56:47.306 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.6/26] IPv6=[] ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" HandleID="k8s-pod-network.66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.344617 containerd[1464]: 2025-01-30 13:56:47.309 [INFO][4591] cni-plugin/k8s.go 386: Populated endpoint ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g59jk" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"", Pod:"coredns-7db6d8ff4d-g59jk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49be0c20575", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:47.344617 containerd[1464]: 2025-01-30 13:56:47.309 [INFO][4591] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.6/32] ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g59jk" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.344617 containerd[1464]: 2025-01-30 13:56:47.309 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
cali49be0c20575 ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g59jk" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.344617 containerd[1464]: 2025-01-30 13:56:47.316 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g59jk" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.344617 containerd[1464]: 2025-01-30 13:56:47.318 [INFO][4591] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g59jk" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949", Pod:"coredns-7db6d8ff4d-g59jk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49be0c20575", MAC:"92:53:25:7d:00:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:47.344617 containerd[1464]: 2025-01-30 13:56:47.338 [INFO][4591] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g59jk" WorkloadEndpoint="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:47.410832 containerd[1464]: time="2025-01-30T13:56:47.410290170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:47.410832 containerd[1464]: time="2025-01-30T13:56:47.410388472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:47.410832 containerd[1464]: time="2025-01-30T13:56:47.410455857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:47.410832 containerd[1464]: time="2025-01-30T13:56:47.410601342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:47.444754 systemd[1]: Started cri-containerd-66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949.scope - libcontainer container 66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949. Jan 30 13:56:47.534527 containerd[1464]: time="2025-01-30T13:56:47.534362830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g59jk,Uid:1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6,Namespace:kube-system,Attempt:1,} returns sandbox id \"66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949\"" Jan 30 13:56:47.536640 kubelet[2576]: E0130 13:56:47.536598 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:47.542704 containerd[1464]: time="2025-01-30T13:56:47.542342831Z" level=info msg="CreateContainer within sandbox \"66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:56:47.565980 containerd[1464]: time="2025-01-30T13:56:47.565637656Z" level=info msg="CreateContainer within sandbox \"66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb96f3281bea7676c75df33d23c215b92b16da244d219eb4da40530aef007a7a\"" Jan 30 13:56:47.569090 containerd[1464]: time="2025-01-30T13:56:47.568914242Z" level=info msg="StartContainer for \"eb96f3281bea7676c75df33d23c215b92b16da244d219eb4da40530aef007a7a\"" Jan 30 13:56:47.621889 systemd[1]: Started cri-containerd-eb96f3281bea7676c75df33d23c215b92b16da244d219eb4da40530aef007a7a.scope - libcontainer container eb96f3281bea7676c75df33d23c215b92b16da244d219eb4da40530aef007a7a. 
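The coredns-7db6d8ff4d-g59jk records above walk the usual CRI order: RunPodSandbox returns sandbox id 66e7927aaea3..., CreateContainer places the coredns container inside that sandbox and returns eb96f3281bea..., and StartContainer runs it (its "returns successfully" line follows just below). A compact stand-in for that three-call sequence in Go; the interface here is a simplification, not the real k8s.io/cri-api client, with both IDs copied from the log:

package main

import "fmt"

// runtimeService models only the ordering of the three CRI calls seen in the
// kubelet/containerd exchange above (stand-in, not the real CRI API).
type runtimeService interface {
	RunPodSandbox(pod string) (sandboxID string)
	CreateContainer(sandboxID, name string) (containerID string)
	StartContainer(containerID string)
}

type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) string {
	return "66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949" // sandbox id from the log
}

func (fakeRuntime) CreateContainer(sandboxID, name string) string {
	return "eb96f3281bea7676c75df33d23c215b92b16da244d219eb4da40530aef007a7a" // container id from the log
}

func (fakeRuntime) StartContainer(containerID string) {
	fmt.Println("StartContainer for", containerID, "returns successfully")
}

func main() {
	var rt runtimeService = fakeRuntime{}
	sandbox := rt.RunPodSandbox("coredns-7db6d8ff4d-g59jk")
	container := rt.CreateContainer(sandbox, "coredns")
	rt.StartContainer(container)
}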
Jan 30 13:56:47.685912 containerd[1464]: time="2025-01-30T13:56:47.685854069Z" level=info msg="StartContainer for \"eb96f3281bea7676c75df33d23c215b92b16da244d219eb4da40530aef007a7a\" returns successfully" Jan 30 13:56:47.866343 systemd-networkd[1365]: calibb48a07b4ce: Gained IPv6LL Jan 30 13:56:48.168781 kubelet[2576]: E0130 13:56:48.167019 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:48.191900 kubelet[2576]: E0130 13:56:48.190841 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:48.196219 kubelet[2576]: I0130 13:56:48.193126 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-g59jk" podStartSLOduration=35.19310398 podStartE2EDuration="35.19310398s" podCreationTimestamp="2025-01-30 13:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:48.192611158 +0000 UTC m=+50.531522297" watchObservedRunningTime="2025-01-30 13:56:48.19310398 +0000 UTC m=+50.532015117" Jan 30 13:56:48.507391 systemd-networkd[1365]: calid6d638bd5b7: Gained IPv6LL Jan 30 13:56:48.954798 systemd-networkd[1365]: cali49be0c20575: Gained IPv6LL Jan 30 13:56:49.019572 containerd[1464]: time="2025-01-30T13:56:49.019515748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:49.021370 containerd[1464]: time="2025-01-30T13:56:49.021310376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:56:49.022335 containerd[1464]: time="2025-01-30T13:56:49.022209390Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:49.027244 containerd[1464]: time="2025-01-30T13:56:49.025537929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:49.027244 containerd[1464]: time="2025-01-30T13:56:49.026934700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.694334364s" Jan 30 13:56:49.027244 containerd[1464]: time="2025-01-30T13:56:49.026996789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:56:49.030615 containerd[1464]: time="2025-01-30T13:56:49.030561736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:56:49.034582 containerd[1464]: time="2025-01-30T13:56:49.034520032Z" level=info msg="CreateContainer within sandbox \"38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:56:49.061430 containerd[1464]: time="2025-01-30T13:56:49.060061530Z" level=info msg="CreateContainer within sandbox \"38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef41dcbbcdfd172a1d5c2d373c4160356033f889282f3859d5841bf0cb532249\"" Jan 30 13:56:49.061062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809226974.mount: Deactivated successfully. Jan 30 13:56:49.067670 containerd[1464]: time="2025-01-30T13:56:49.062744667Z" level=info msg="StartContainer for \"ef41dcbbcdfd172a1d5c2d373c4160356033f889282f3859d5841bf0cb532249\"" Jan 30 13:56:49.115727 systemd[1]: Started cri-containerd-ef41dcbbcdfd172a1d5c2d373c4160356033f889282f3859d5841bf0cb532249.scope - libcontainer container ef41dcbbcdfd172a1d5c2d373c4160356033f889282f3859d5841bf0cb532249. Jan 30 13:56:49.176034 kubelet[2576]: E0130 13:56:49.175292 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:49.176034 kubelet[2576]: E0130 13:56:49.175488 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:49.181083 containerd[1464]: time="2025-01-30T13:56:49.180967273Z" level=info msg="StartContainer for \"ef41dcbbcdfd172a1d5c2d373c4160356033f889282f3859d5841bf0cb532249\" returns successfully" Jan 30 13:56:49.398494 containerd[1464]: time="2025-01-30T13:56:49.398381629Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:49.399623 containerd[1464]: time="2025-01-30T13:56:49.399526764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:56:49.403833 containerd[1464]: time="2025-01-30T13:56:49.403768308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 372.920575ms" Jan 30 13:56:49.403833 containerd[1464]: time="2025-01-30T13:56:49.403833490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:56:49.407095 containerd[1464]: time="2025-01-30T13:56:49.406596603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:56:49.410447 containerd[1464]: time="2025-01-30T13:56:49.410111823Z" level=info msg="CreateContainer within sandbox \"f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:56:49.441155 containerd[1464]: time="2025-01-30T13:56:49.441096077Z" level=info msg="CreateContainer within sandbox \"f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e2d2b6d593b344624256762544c63f0084622e59bcd4f3a2c08fb4e591c2762\"" Jan 30 13:56:49.444029 
containerd[1464]: time="2025-01-30T13:56:49.443964498Z" level=info msg="StartContainer for \"3e2d2b6d593b344624256762544c63f0084622e59bcd4f3a2c08fb4e591c2762\"" Jan 30 13:56:49.515736 systemd[1]: Started cri-containerd-3e2d2b6d593b344624256762544c63f0084622e59bcd4f3a2c08fb4e591c2762.scope - libcontainer container 3e2d2b6d593b344624256762544c63f0084622e59bcd4f3a2c08fb4e591c2762. Jan 30 13:56:49.590644 containerd[1464]: time="2025-01-30T13:56:49.590574904Z" level=info msg="StartContainer for \"3e2d2b6d593b344624256762544c63f0084622e59bcd4f3a2c08fb4e591c2762\" returns successfully" Jan 30 13:56:50.235557 kubelet[2576]: I0130 13:56:50.234846 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7846bc9c4f-pphrg" podStartSLOduration=27.408601159 podStartE2EDuration="31.234812369s" podCreationTimestamp="2025-01-30 13:56:19 +0000 UTC" firstStartedPulling="2025-01-30 13:56:45.578609594 +0000 UTC m=+47.917520714" lastFinishedPulling="2025-01-30 13:56:49.404820807 +0000 UTC m=+51.743731924" observedRunningTime="2025-01-30 13:56:50.208505669 +0000 UTC m=+52.547416807" watchObservedRunningTime="2025-01-30 13:56:50.234812369 +0000 UTC m=+52.573723509" Jan 30 13:56:50.235557 kubelet[2576]: I0130 13:56:50.235361 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7846bc9c4f-9s5kj" podStartSLOduration=27.770363875 podStartE2EDuration="31.235347028s" podCreationTimestamp="2025-01-30 13:56:19 +0000 UTC" firstStartedPulling="2025-01-30 13:56:45.563998897 +0000 UTC m=+47.902910017" lastFinishedPulling="2025-01-30 13:56:49.02898204 +0000 UTC m=+51.367893170" observedRunningTime="2025-01-30 13:56:50.232857256 +0000 UTC m=+52.571768394" watchObservedRunningTime="2025-01-30 13:56:50.235347028 +0000 UTC m=+52.574258166" Jan 30 13:56:50.992459 containerd[1464]: time="2025-01-30T13:56:50.992370256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:50.994662 containerd[1464]: time="2025-01-30T13:56:50.994581537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:56:50.996729 containerd[1464]: time="2025-01-30T13:56:50.996675293Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:51.003136 containerd[1464]: time="2025-01-30T13:56:51.003037379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:51.008754 containerd[1464]: time="2025-01-30T13:56:51.007592094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.600941634s" Jan 30 13:56:51.008754 containerd[1464]: time="2025-01-30T13:56:51.007664627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:56:51.012105 containerd[1464]: time="2025-01-30T13:56:51.012053415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:56:51.023225 containerd[1464]: time="2025-01-30T13:56:51.022958465Z" level=info msg="CreateContainer within sandbox \"503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:56:51.066070 containerd[1464]: time="2025-01-30T13:56:51.065111216Z" level=info msg="CreateContainer within sandbox \"503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9f69a2c9a953457ed04bc9b9ba80e772f8f8049129e6851d5f69bd4dac0b24d4\"" Jan 30 13:56:51.073230 containerd[1464]: time="2025-01-30T13:56:51.072951952Z" level=info msg="StartContainer for \"9f69a2c9a953457ed04bc9b9ba80e772f8f8049129e6851d5f69bd4dac0b24d4\"" Jan 30 13:56:51.165081 systemd[1]: Started cri-containerd-9f69a2c9a953457ed04bc9b9ba80e772f8f8049129e6851d5f69bd4dac0b24d4.scope - libcontainer container 9f69a2c9a953457ed04bc9b9ba80e772f8f8049129e6851d5f69bd4dac0b24d4. Jan 30 13:56:51.199846 kubelet[2576]: I0130 13:56:51.199745 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:56:51.199846 kubelet[2576]: I0130 13:56:51.199758 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:56:51.222473 containerd[1464]: time="2025-01-30T13:56:51.222382691Z" level=info msg="StartContainer for \"9f69a2c9a953457ed04bc9b9ba80e772f8f8049129e6851d5f69bd4dac0b24d4\" returns successfully" Jan 30 13:56:51.982714 kubelet[2576]: I0130 13:56:51.982543 2576 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:56:51.985358 kubelet[2576]: I0130 13:56:51.985225 2576 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:56:52.230373 kubelet[2576]: I0130 13:56:52.230304 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rjzh2" podStartSLOduration=26.859156019 podStartE2EDuration="33.230285778s" podCreationTimestamp="2025-01-30 13:56:19 +0000 UTC" firstStartedPulling="2025-01-30 13:56:44.640123266 +0000 UTC m=+46.979034382" lastFinishedPulling="2025-01-30 13:56:51.011253003 +0000 UTC m=+53.350164141" observedRunningTime="2025-01-30 13:56:52.226926241 +0000 UTC m=+54.565837382" watchObservedRunningTime="2025-01-30 13:56:52.230285778 +0000 UTC m=+54.569196923" Jan 30 13:56:53.235466 containerd[1464]: time="2025-01-30T13:56:53.235348318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:53.236871 containerd[1464]: time="2025-01-30T13:56:53.236795470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:56:53.239064 containerd[1464]: time="2025-01-30T13:56:53.239001593Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:53.243459 containerd[1464]: 
time="2025-01-30T13:56:53.242468224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:53.243634 containerd[1464]: time="2025-01-30T13:56:53.243533299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.231229202s" Jan 30 13:56:53.243634 containerd[1464]: time="2025-01-30T13:56:53.243590747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:56:53.277802 containerd[1464]: time="2025-01-30T13:56:53.277685416Z" level=info msg="CreateContainer within sandbox \"a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:56:53.349752 containerd[1464]: time="2025-01-30T13:56:53.349675856Z" level=info msg="CreateContainer within sandbox \"a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c175ab69621af5db4f3d44438f271d4592b11c9056c1a8954979ea3b0e9f0752\"" Jan 30 13:56:53.350856 containerd[1464]: time="2025-01-30T13:56:53.350780493Z" level=info msg="StartContainer for \"c175ab69621af5db4f3d44438f271d4592b11c9056c1a8954979ea3b0e9f0752\"" Jan 30 13:56:53.392752 systemd[1]: Started cri-containerd-c175ab69621af5db4f3d44438f271d4592b11c9056c1a8954979ea3b0e9f0752.scope - libcontainer container c175ab69621af5db4f3d44438f271d4592b11c9056c1a8954979ea3b0e9f0752. Jan 30 13:56:53.453448 containerd[1464]: time="2025-01-30T13:56:53.451468825Z" level=info msg="StartContainer for \"c175ab69621af5db4f3d44438f271d4592b11c9056c1a8954979ea3b0e9f0752\" returns successfully" Jan 30 13:56:53.863586 kubelet[2576]: E0130 13:56:53.863532 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:54.244204 kubelet[2576]: I0130 13:56:54.242501 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59b5bcffb9-86cwd" podStartSLOduration=27.904001124 podStartE2EDuration="34.242480336s" podCreationTimestamp="2025-01-30 13:56:20 +0000 UTC" firstStartedPulling="2025-01-30 13:56:46.906268368 +0000 UTC m=+49.245179497" lastFinishedPulling="2025-01-30 13:56:53.244747593 +0000 UTC m=+55.583658709" observedRunningTime="2025-01-30 13:56:54.2422781 +0000 UTC m=+56.581189239" watchObservedRunningTime="2025-01-30 13:56:54.242480336 +0000 UTC m=+56.581391476" Jan 30 13:56:57.930448 containerd[1464]: time="2025-01-30T13:56:57.929358303Z" level=info msg="StopPodSandbox for \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\"" Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.137 [WARNING][4958] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0", GenerateName:"calico-kube-controllers-59b5bcffb9-", Namespace:"calico-system", SelfLink:"", UID:"bf3a2e24-1024-4d44-97d6-556904b751fc", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b5bcffb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3", Pod:"calico-kube-controllers-59b5bcffb9-86cwd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6d638bd5b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.141 [INFO][4958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.141 [INFO][4958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" iface="eth0" netns="" Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.142 [INFO][4958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.142 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.200 [INFO][4964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" HandleID="k8s-pod-network.22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.201 [INFO][4964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.201 [INFO][4964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.209 [WARNING][4964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" HandleID="k8s-pod-network.22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.209 [INFO][4964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" HandleID="k8s-pod-network.22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.214 [INFO][4964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.219507 containerd[1464]: 2025-01-30 13:56:58.217 [INFO][4958] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:58.220823 containerd[1464]: time="2025-01-30T13:56:58.219488973Z" level=info msg="TearDown network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\" successfully" Jan 30 13:56:58.220823 containerd[1464]: time="2025-01-30T13:56:58.219685654Z" level=info msg="StopPodSandbox for \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\" returns successfully" Jan 30 13:56:58.220823 containerd[1464]: time="2025-01-30T13:56:58.220723128Z" level=info msg="RemovePodSandbox for \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\"" Jan 30 13:56:58.220823 containerd[1464]: time="2025-01-30T13:56:58.220763291Z" level=info msg="Forcibly stopping sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\"" Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.309 [WARNING][4982] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0", GenerateName:"calico-kube-controllers-59b5bcffb9-", Namespace:"calico-system", SelfLink:"", UID:"bf3a2e24-1024-4d44-97d6-556904b751fc", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b5bcffb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"a2498b032f73d300b7b30eb0745fc4b343389ae97e46ea3312b90b46f73fd1e3", Pod:"calico-kube-controllers-59b5bcffb9-86cwd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6d638bd5b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.310 [INFO][4982] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.310 [INFO][4982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" iface="eth0" netns="" Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.310 [INFO][4982] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.310 [INFO][4982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.362 [INFO][4989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" HandleID="k8s-pod-network.22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.363 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.363 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.371 [WARNING][4989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" HandleID="k8s-pod-network.22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.371 [INFO][4989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" HandleID="k8s-pod-network.22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--kube--controllers--59b5bcffb9--86cwd-eth0" Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.373 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.378313 containerd[1464]: 2025-01-30 13:56:58.375 [INFO][4982] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318" Jan 30 13:56:58.379362 containerd[1464]: time="2025-01-30T13:56:58.378383808Z" level=info msg="TearDown network for sandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\" successfully" Jan 30 13:56:58.416350 containerd[1464]: time="2025-01-30T13:56:58.416220155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:58.416350 containerd[1464]: time="2025-01-30T13:56:58.416336385Z" level=info msg="RemovePodSandbox \"22252d1ae54a10c9df79c77bfbd409a9947aba21ee2d8b969aad36977643b318\" returns successfully" Jan 30 13:56:58.417339 containerd[1464]: time="2025-01-30T13:56:58.417236209Z" level=info msg="StopPodSandbox for \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\"" Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.487 [WARNING][5008] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0", GenerateName:"calico-apiserver-7846bc9c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"10fe79e7-f8b8-48d0-9f50-3dcca5453972", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7846bc9c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047", Pod:"calico-apiserver-7846bc9c4f-9s5kj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabfcc8296f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.488 [INFO][5008] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.488 [INFO][5008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" iface="eth0" netns="" Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.488 [INFO][5008] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.488 [INFO][5008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.539 [INFO][5014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" HandleID="k8s-pod-network.4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.541 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.543 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.557 [WARNING][5014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" HandleID="k8s-pod-network.4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.557 [INFO][5014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" HandleID="k8s-pod-network.4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.561 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.568753 containerd[1464]: 2025-01-30 13:56:58.564 [INFO][5008] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:58.568753 containerd[1464]: time="2025-01-30T13:56:58.567466835Z" level=info msg="TearDown network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\" successfully" Jan 30 13:56:58.568753 containerd[1464]: time="2025-01-30T13:56:58.567496529Z" level=info msg="StopPodSandbox for \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\" returns successfully" Jan 30 13:56:58.568753 containerd[1464]: time="2025-01-30T13:56:58.568570793Z" level=info msg="RemovePodSandbox for \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\"" Jan 30 13:56:58.572799 containerd[1464]: time="2025-01-30T13:56:58.568763753Z" level=info msg="Forcibly stopping sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\"" Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.655 [WARNING][5033] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0", GenerateName:"calico-apiserver-7846bc9c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"10fe79e7-f8b8-48d0-9f50-3dcca5453972", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7846bc9c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"38ee6a79b60317781c20b1748a363b321511afc394d09b69ec9b1889cf182047", Pod:"calico-apiserver-7846bc9c4f-9s5kj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabfcc8296f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.655 [INFO][5033] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.655 [INFO][5033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" iface="eth0" netns="" Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.655 [INFO][5033] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.655 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.722 [INFO][5039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" HandleID="k8s-pod-network.4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.723 [INFO][5039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.723 [INFO][5039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.735 [WARNING][5039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" HandleID="k8s-pod-network.4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.735 [INFO][5039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" HandleID="k8s-pod-network.4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--9s5kj-eth0" Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.738 [INFO][5039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.746741 containerd[1464]: 2025-01-30 13:56:58.743 [INFO][5033] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602" Jan 30 13:56:58.748384 containerd[1464]: time="2025-01-30T13:56:58.746825243Z" level=info msg="TearDown network for sandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\" successfully" Jan 30 13:56:58.752807 containerd[1464]: time="2025-01-30T13:56:58.752712642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:58.752969 containerd[1464]: time="2025-01-30T13:56:58.752843159Z" level=info msg="RemovePodSandbox \"4093d088a5165c04bca917947a3a4d88720137f09d2675b436604aea9e57a602\" returns successfully" Jan 30 13:56:58.753918 containerd[1464]: time="2025-01-30T13:56:58.753499894Z" level=info msg="StopPodSandbox for \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\"" Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.823 [WARNING][5057] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1de43d45-e363-48a6-9642-5ad8984fd09e", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b", Pod:"csi-node-driver-rjzh2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e1bd6f9e36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.823 [INFO][5057] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.823 [INFO][5057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" iface="eth0" netns="" Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.823 [INFO][5057] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.823 [INFO][5057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.873 [INFO][5063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" HandleID="k8s-pod-network.df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.874 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.874 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.891 [WARNING][5063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" HandleID="k8s-pod-network.df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.891 [INFO][5063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" HandleID="k8s-pod-network.df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.894 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.900921 containerd[1464]: 2025-01-30 13:56:58.897 [INFO][5057] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:58.902257 containerd[1464]: time="2025-01-30T13:56:58.901539129Z" level=info msg="TearDown network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\" successfully" Jan 30 13:56:58.902257 containerd[1464]: time="2025-01-30T13:56:58.901582515Z" level=info msg="StopPodSandbox for \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\" returns successfully" Jan 30 13:56:58.903835 containerd[1464]: time="2025-01-30T13:56:58.903784787Z" level=info msg="RemovePodSandbox for \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\"" Jan 30 13:56:58.903835 containerd[1464]: time="2025-01-30T13:56:58.903835536Z" level=info msg="Forcibly stopping sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\"" Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:58.968 [WARNING][5082] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1de43d45-e363-48a6-9642-5ad8984fd09e", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"503d33f9d5d2ec042241150c52d83601e138e8007d241e72d88dbfdcae717c3b", Pod:"csi-node-driver-rjzh2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6e1bd6f9e36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:58.968 [INFO][5082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:58.968 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" iface="eth0" netns="" Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:58.968 [INFO][5082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:58.968 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:59.001 [INFO][5088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" HandleID="k8s-pod-network.df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:59.001 [INFO][5088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:59.001 [INFO][5088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:59.014 [WARNING][5088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" HandleID="k8s-pod-network.df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:59.014 [INFO][5088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" HandleID="k8s-pod-network.df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Workload="ci--4081.3.0--a--04505505d0-k8s-csi--node--driver--rjzh2-eth0" Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:59.017 [INFO][5088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.022855 containerd[1464]: 2025-01-30 13:56:59.019 [INFO][5082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b" Jan 30 13:56:59.024931 containerd[1464]: time="2025-01-30T13:56:59.022943025Z" level=info msg="TearDown network for sandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\" successfully" Jan 30 13:56:59.035350 containerd[1464]: time="2025-01-30T13:56:59.035205947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:59.035350 containerd[1464]: time="2025-01-30T13:56:59.035342469Z" level=info msg="RemovePodSandbox \"df86bc2f60fe79ba2d7b326dea8123a5ca8d0b4b229b53f625a41bc6f2310d9b\" returns successfully" Jan 30 13:56:59.036647 containerd[1464]: time="2025-01-30T13:56:59.036566343Z" level=info msg="StopPodSandbox for \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\"" Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.129 [WARNING][5106] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949", Pod:"coredns-7db6d8ff4d-g59jk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49be0c20575", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.130 [INFO][5106] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.130 [INFO][5106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" iface="eth0" netns="" Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.131 [INFO][5106] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.131 [INFO][5106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.176 [INFO][5113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" HandleID="k8s-pod-network.e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.178 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.178 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.187 [WARNING][5113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" HandleID="k8s-pod-network.e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.187 [INFO][5113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" HandleID="k8s-pod-network.e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.190 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.196041 containerd[1464]: 2025-01-30 13:56:59.193 [INFO][5106] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:59.196041 containerd[1464]: time="2025-01-30T13:56:59.196008658Z" level=info msg="TearDown network for sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\" successfully" Jan 30 13:56:59.196041 containerd[1464]: time="2025-01-30T13:56:59.196042945Z" level=info msg="StopPodSandbox for \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\" returns successfully" Jan 30 13:56:59.199659 containerd[1464]: time="2025-01-30T13:56:59.196871573Z" level=info msg="RemovePodSandbox for \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\"" Jan 30 13:56:59.199659 containerd[1464]: time="2025-01-30T13:56:59.196907267Z" level=info msg="Forcibly stopping sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\"" Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.280 [WARNING][5131] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1ed9698b-56af-4e7f-90ec-aa46e4b9c7f6", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"66e7927aaea31dbac86b327a03fffbd6f32e22d27f9b8dc4e688b2bedd954949", Pod:"coredns-7db6d8ff4d-g59jk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49be0c20575", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.280 [INFO][5131] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.280 [INFO][5131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" iface="eth0" netns="" Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.280 [INFO][5131] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.280 [INFO][5131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.320 [INFO][5137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" HandleID="k8s-pod-network.e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.320 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.321 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.330 [WARNING][5137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" HandleID="k8s-pod-network.e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.330 [INFO][5137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" HandleID="k8s-pod-network.e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--g59jk-eth0" Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.334 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.343375 containerd[1464]: 2025-01-30 13:56:59.338 [INFO][5131] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f" Jan 30 13:56:59.343375 containerd[1464]: time="2025-01-30T13:56:59.341831197Z" level=info msg="TearDown network for sandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\" successfully" Jan 30 13:56:59.349229 containerd[1464]: time="2025-01-30T13:56:59.349131747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:59.349769 containerd[1464]: time="2025-01-30T13:56:59.349689659Z" level=info msg="RemovePodSandbox \"e61514d565e112d43dcdf6e49b4138102ea104657d86b6c43fd23a2a3cce1d2f\" returns successfully" Jan 30 13:56:59.350873 containerd[1464]: time="2025-01-30T13:56:59.350781281Z" level=info msg="StopPodSandbox for \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\"" Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.429 [WARNING][5156] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0", GenerateName:"calico-apiserver-7846bc9c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0510e406-ed27-4565-a620-76d33cf07b41", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7846bc9c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2", Pod:"calico-apiserver-7846bc9c4f-pphrg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42520583144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.429 [INFO][5156] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.429 [INFO][5156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" iface="eth0" netns="" Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.429 [INFO][5156] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.429 [INFO][5156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.473 [INFO][5162] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" HandleID="k8s-pod-network.181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.473 [INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.473 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.485 [WARNING][5162] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" HandleID="k8s-pod-network.181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.485 [INFO][5162] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" HandleID="k8s-pod-network.181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.488 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.495786 containerd[1464]: 2025-01-30 13:56:59.491 [INFO][5156] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:59.495786 containerd[1464]: time="2025-01-30T13:56:59.495759411Z" level=info msg="TearDown network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\" successfully" Jan 30 13:56:59.496343 containerd[1464]: time="2025-01-30T13:56:59.495803952Z" level=info msg="StopPodSandbox for \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\" returns successfully" Jan 30 13:56:59.498982 containerd[1464]: time="2025-01-30T13:56:59.498930330Z" level=info msg="RemovePodSandbox for \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\"" Jan 30 13:56:59.498982 containerd[1464]: time="2025-01-30T13:56:59.498993356Z" level=info msg="Forcibly stopping sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\"" Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.601 [WARNING][5180] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0", GenerateName:"calico-apiserver-7846bc9c4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0510e406-ed27-4565-a620-76d33cf07b41", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7846bc9c4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"f9c35256784f07e128503956e361a86168f637910b00c3c2130533a0ca877bd2", Pod:"calico-apiserver-7846bc9c4f-pphrg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42520583144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.601 [INFO][5180] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.601 [INFO][5180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" iface="eth0" netns="" Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.601 [INFO][5180] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.601 [INFO][5180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.680 [INFO][5186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" HandleID="k8s-pod-network.181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.682 [INFO][5186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.682 [INFO][5186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.716 [WARNING][5186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" HandleID="k8s-pod-network.181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.716 [INFO][5186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" HandleID="k8s-pod-network.181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Workload="ci--4081.3.0--a--04505505d0-k8s-calico--apiserver--7846bc9c4f--pphrg-eth0" Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.730 [INFO][5186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.738336 containerd[1464]: 2025-01-30 13:56:59.734 [INFO][5180] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535" Jan 30 13:56:59.739309 containerd[1464]: time="2025-01-30T13:56:59.738449712Z" level=info msg="TearDown network for sandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\" successfully" Jan 30 13:56:59.743996 containerd[1464]: time="2025-01-30T13:56:59.743909710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:59.744187 containerd[1464]: time="2025-01-30T13:56:59.744045581Z" level=info msg="RemovePodSandbox \"181523b045311225f5410a286a19f56f979b4bd601e6c79b5d244029adb48535\" returns successfully" Jan 30 13:56:59.745010 containerd[1464]: time="2025-01-30T13:56:59.744963782Z" level=info msg="StopPodSandbox for \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\"" Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.858 [WARNING][5204] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"98eaa407-4d24-4e4e-b6fc-fe8371389f6d", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da", Pod:"coredns-7db6d8ff4d-zhk5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb48a07b4ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.859 [INFO][5204] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.859 [INFO][5204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" iface="eth0" netns="" Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.859 [INFO][5204] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.859 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.925 [INFO][5210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" HandleID="k8s-pod-network.b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.925 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.925 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.938 [WARNING][5210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" HandleID="k8s-pod-network.b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.938 [INFO][5210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" HandleID="k8s-pod-network.b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.945 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.952249 containerd[1464]: 2025-01-30 13:56:59.949 [INFO][5204] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:56:59.955463 containerd[1464]: time="2025-01-30T13:56:59.952329923Z" level=info msg="TearDown network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\" successfully" Jan 30 13:56:59.955463 containerd[1464]: time="2025-01-30T13:56:59.952367722Z" level=info msg="StopPodSandbox for \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\" returns successfully" Jan 30 13:56:59.955463 containerd[1464]: time="2025-01-30T13:56:59.953824468Z" level=info msg="RemovePodSandbox for \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\"" Jan 30 13:56:59.955463 containerd[1464]: time="2025-01-30T13:56:59.953872554Z" level=info msg="Forcibly stopping sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\"" Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.020 [WARNING][5228] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"98eaa407-4d24-4e4e-b6fc-fe8371389f6d", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-04505505d0", ContainerID:"1182bb2cf2a5363468f87940c57b7e858699dc00dc16293a913863d47f74f9da", Pod:"coredns-7db6d8ff4d-zhk5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb48a07b4ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.021 [INFO][5228] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.021 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" iface="eth0" netns="" Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.021 [INFO][5228] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.021 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.067 [INFO][5234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" HandleID="k8s-pod-network.b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.068 [INFO][5234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.068 [INFO][5234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.078 [WARNING][5234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" HandleID="k8s-pod-network.b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.079 [INFO][5234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" HandleID="k8s-pod-network.b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Workload="ci--4081.3.0--a--04505505d0-k8s-coredns--7db6d8ff4d--zhk5s-eth0" Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.082 [INFO][5234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:00.086564 containerd[1464]: 2025-01-30 13:57:00.084 [INFO][5228] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae" Jan 30 13:57:00.088733 containerd[1464]: time="2025-01-30T13:57:00.086615260Z" level=info msg="TearDown network for sandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\" successfully" Jan 30 13:57:00.090723 containerd[1464]: time="2025-01-30T13:57:00.090639191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:00.090870 containerd[1464]: time="2025-01-30T13:57:00.090773617Z" level=info msg="RemovePodSandbox \"b332e365fb01aafbcd741220c7f65ef355155af09ceded9433883f9947dcaaae\" returns successfully" Jan 30 13:57:04.184085 systemd[1]: Started sshd@9-64.23.155.240:22-147.75.109.163:44046.service - OpenSSH per-connection server daemon (147.75.109.163:44046). Jan 30 13:57:04.325561 sshd[5263]: Accepted publickey for core from 147.75.109.163 port 44046 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:04.329848 sshd[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:04.340434 systemd-logind[1442]: New session 10 of user core. Jan 30 13:57:04.345840 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:57:05.119650 sshd[5263]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:05.124015 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:57:05.124558 systemd[1]: sshd@9-64.23.155.240:22-147.75.109.163:44046.service: Deactivated successfully. Jan 30 13:57:05.127628 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:57:05.130551 systemd-logind[1442]: Removed session 10. Jan 30 13:57:05.360122 kubelet[2576]: I0130 13:57:05.360074 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:57:10.137857 systemd[1]: Started sshd@10-64.23.155.240:22-147.75.109.163:59130.service - OpenSSH per-connection server daemon (147.75.109.163:59130). Jan 30 13:57:10.228948 sshd[5287]: Accepted publickey for core from 147.75.109.163 port 59130 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:10.232684 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:10.244148 systemd-logind[1442]: New session 11 of user core. 
Jan 30 13:57:10.252779 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:57:10.423741 sshd[5287]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:10.429122 systemd[1]: sshd@10-64.23.155.240:22-147.75.109.163:59130.service: Deactivated successfully. Jan 30 13:57:10.431880 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:57:10.432946 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:57:10.434782 systemd-logind[1442]: Removed session 11. Jan 30 13:57:14.804673 kubelet[2576]: E0130 13:57:14.804557 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:15.445853 systemd[1]: Started sshd@11-64.23.155.240:22-147.75.109.163:59134.service - OpenSSH per-connection server daemon (147.75.109.163:59134). Jan 30 13:57:15.510438 sshd[5305]: Accepted publickey for core from 147.75.109.163 port 59134 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:15.512757 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:15.519279 systemd-logind[1442]: New session 12 of user core. Jan 30 13:57:15.523645 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:57:15.684872 sshd[5305]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:15.689906 systemd[1]: sshd@11-64.23.155.240:22-147.75.109.163:59134.service: Deactivated successfully. Jan 30 13:57:15.693368 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:57:15.695315 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:57:15.696464 systemd-logind[1442]: Removed session 12. Jan 30 13:57:17.236477 kubelet[2576]: I0130 13:57:17.236102 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:57:18.805443 kubelet[2576]: E0130 13:57:18.805046 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:20.707855 systemd[1]: Started sshd@12-64.23.155.240:22-147.75.109.163:42592.service - OpenSSH per-connection server daemon (147.75.109.163:42592). Jan 30 13:57:20.795442 sshd[5321]: Accepted publickey for core from 147.75.109.163 port 42592 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:20.798631 sshd[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:20.805690 systemd-logind[1442]: New session 13 of user core. Jan 30 13:57:20.811205 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:57:21.036219 sshd[5321]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:21.052971 systemd[1]: sshd@12-64.23.155.240:22-147.75.109.163:42592.service: Deactivated successfully. Jan 30 13:57:21.057544 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:57:21.063393 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:57:21.068253 systemd[1]: Started sshd@13-64.23.155.240:22-147.75.109.163:42606.service - OpenSSH per-connection server daemon (147.75.109.163:42606). Jan 30 13:57:21.075098 systemd-logind[1442]: Removed session 13. 
Jan 30 13:57:21.157014 sshd[5334]: Accepted publickey for core from 147.75.109.163 port 42606 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:21.162305 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:21.170925 systemd-logind[1442]: New session 14 of user core.
Jan 30 13:57:21.174727 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:57:21.527716 sshd[5334]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:21.540204 systemd[1]: sshd@13-64.23.155.240:22-147.75.109.163:42606.service: Deactivated successfully.
Jan 30 13:57:21.546813 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:57:21.549671 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:57:21.566577 systemd[1]: Started sshd@14-64.23.155.240:22-147.75.109.163:42610.service - OpenSSH per-connection server daemon (147.75.109.163:42610).
Jan 30 13:57:21.568785 systemd-logind[1442]: Removed session 14.
Jan 30 13:57:21.663520 sshd[5345]: Accepted publickey for core from 147.75.109.163 port 42610 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:21.669675 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:21.677647 systemd-logind[1442]: New session 15 of user core.
Jan 30 13:57:21.681677 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:57:21.875581 sshd[5345]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:21.879214 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:57:21.879627 systemd[1]: sshd@14-64.23.155.240:22-147.75.109.163:42610.service: Deactivated successfully.
Jan 30 13:57:21.882485 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:57:21.885555 systemd-logind[1442]: Removed session 15.
Jan 30 13:57:26.805149 kubelet[2576]: E0130 13:57:26.804967 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:57:26.896061 systemd[1]: Started sshd@15-64.23.155.240:22-147.75.109.163:42618.service - OpenSSH per-connection server daemon (147.75.109.163:42618).
Jan 30 13:57:26.936812 sshd[5383]: Accepted publickey for core from 147.75.109.163 port 42618 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:26.938584 sshd[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:26.947493 systemd-logind[1442]: New session 16 of user core.
Jan 30 13:57:26.952947 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:57:27.108742 sshd[5383]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:27.113429 systemd[1]: sshd@15-64.23.155.240:22-147.75.109.163:42618.service: Deactivated successfully.
Jan 30 13:57:27.117056 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:57:27.117835 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:57:27.118881 systemd-logind[1442]: Removed session 16.
Jan 30 13:57:32.123590 systemd[1]: Started sshd@16-64.23.155.240:22-147.75.109.163:44008.service - OpenSSH per-connection server daemon (147.75.109.163:44008).
Jan 30 13:57:32.213871 sshd[5421]: Accepted publickey for core from 147.75.109.163 port 44008 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:32.216836 sshd[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:32.224010 systemd-logind[1442]: New session 17 of user core.
Jan 30 13:57:32.230212 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:57:32.435769 sshd[5421]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:32.441083 systemd[1]: sshd@16-64.23.155.240:22-147.75.109.163:44008.service: Deactivated successfully.
Jan 30 13:57:32.443989 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:57:32.445551 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:57:32.446887 systemd-logind[1442]: Removed session 17.
Jan 30 13:57:37.456841 systemd[1]: Started sshd@17-64.23.155.240:22-147.75.109.163:60392.service - OpenSSH per-connection server daemon (147.75.109.163:60392).
Jan 30 13:57:37.557944 sshd[5433]: Accepted publickey for core from 147.75.109.163 port 60392 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:37.561950 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:37.568689 systemd-logind[1442]: New session 18 of user core.
Jan 30 13:57:37.575698 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:57:37.905696 sshd[5433]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:37.913223 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:57:37.914100 systemd[1]: sshd@17-64.23.155.240:22-147.75.109.163:60392.service: Deactivated successfully.
Jan 30 13:57:37.916794 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:57:37.917953 systemd-logind[1442]: Removed session 18.
Jan 30 13:57:38.804807 kubelet[2576]: E0130 13:57:38.804326 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:57:42.925852 systemd[1]: Started sshd@18-64.23.155.240:22-147.75.109.163:60396.service - OpenSSH per-connection server daemon (147.75.109.163:60396).
Jan 30 13:57:43.028133 sshd[5467]: Accepted publickey for core from 147.75.109.163 port 60396 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:43.030969 sshd[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:43.036781 systemd-logind[1442]: New session 19 of user core.
Jan 30 13:57:43.043809 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:57:43.316108 sshd[5467]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:43.333854 systemd[1]: Started sshd@19-64.23.155.240:22-147.75.109.163:60410.service - OpenSSH per-connection server daemon (147.75.109.163:60410).
Jan 30 13:57:43.335029 systemd[1]: sshd@18-64.23.155.240:22-147.75.109.163:60396.service: Deactivated successfully.
Jan 30 13:57:43.338834 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:57:43.342249 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:57:43.347390 systemd-logind[1442]: Removed session 19.
Jan 30 13:57:43.421660 sshd[5478]: Accepted publickey for core from 147.75.109.163 port 60410 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:43.424807 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:43.432951 systemd-logind[1442]: New session 20 of user core.
Jan 30 13:57:43.442860 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:57:43.798721 sshd[5478]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:43.806412 kubelet[2576]: E0130 13:57:43.805730 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:57:43.813589 systemd[1]: sshd@19-64.23.155.240:22-147.75.109.163:60410.service: Deactivated successfully.
Jan 30 13:57:43.816084 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:57:43.819370 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:57:43.825942 systemd[1]: Started sshd@20-64.23.155.240:22-147.75.109.163:60412.service - OpenSSH per-connection server daemon (147.75.109.163:60412).
Jan 30 13:57:43.827857 systemd-logind[1442]: Removed session 20.
Jan 30 13:57:43.886525 sshd[5492]: Accepted publickey for core from 147.75.109.163 port 60412 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:43.888932 sshd[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:43.897730 systemd-logind[1442]: New session 21 of user core.
Jan 30 13:57:43.901653 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:57:46.122473 sshd[5492]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:46.132962 systemd[1]: Started sshd@21-64.23.155.240:22-147.75.109.163:60420.service - OpenSSH per-connection server daemon (147.75.109.163:60420).
Jan 30 13:57:46.145138 systemd[1]: sshd@20-64.23.155.240:22-147.75.109.163:60412.service: Deactivated successfully.
Jan 30 13:57:46.152970 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:57:46.161211 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:57:46.172538 systemd-logind[1442]: Removed session 21.
Jan 30 13:57:46.256818 sshd[5510]: Accepted publickey for core from 147.75.109.163 port 60420 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:46.261162 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:46.268160 systemd-logind[1442]: New session 22 of user core.
Jan 30 13:57:46.274879 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:57:46.962294 sshd[5510]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:46.976808 systemd[1]: sshd@21-64.23.155.240:22-147.75.109.163:60420.service: Deactivated successfully.
Jan 30 13:57:46.981078 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:57:46.983901 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:57:46.992302 systemd[1]: Started sshd@22-64.23.155.240:22-147.75.109.163:60432.service - OpenSSH per-connection server daemon (147.75.109.163:60432).
Jan 30 13:57:46.995553 systemd-logind[1442]: Removed session 22.
Jan 30 13:57:47.042611 sshd[5523]: Accepted publickey for core from 147.75.109.163 port 60432 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:47.044621 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:47.050555 systemd-logind[1442]: New session 23 of user core.
Jan 30 13:57:47.059737 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:57:47.211308 sshd[5523]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:47.216608 systemd[1]: sshd@22-64.23.155.240:22-147.75.109.163:60432.service: Deactivated successfully.
Jan 30 13:57:47.222928 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:57:47.224250 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:57:47.226927 systemd-logind[1442]: Removed session 23.
Jan 30 13:57:52.231035 systemd[1]: Started sshd@23-64.23.155.240:22-147.75.109.163:45404.service - OpenSSH per-connection server daemon (147.75.109.163:45404).
Jan 30 13:57:52.280046 sshd[5536]: Accepted publickey for core from 147.75.109.163 port 45404 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:52.281829 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:52.286596 systemd-logind[1442]: New session 24 of user core.
Jan 30 13:57:52.308740 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:57:52.445681 sshd[5536]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:52.450684 systemd[1]: sshd@23-64.23.155.240:22-147.75.109.163:45404.service: Deactivated successfully.
Jan 30 13:57:52.453264 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:57:52.454476 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:57:52.455415 systemd-logind[1442]: Removed session 24.
Jan 30 13:57:53.780816 systemd[1]: run-containerd-runc-k8s.io-684b2bf1033d80f4b215300ea3424f41677ce8aa560a4cb4d86d78f5204c11dc-runc.oiaIyb.mount: Deactivated successfully.
Jan 30 13:57:57.458252 systemd[1]: Started sshd@24-64.23.155.240:22-147.75.109.163:58774.service - OpenSSH per-connection server daemon (147.75.109.163:58774).
Jan 30 13:57:57.549596 sshd[5574]: Accepted publickey for core from 147.75.109.163 port 58774 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:57.553588 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:57.563693 systemd-logind[1442]: New session 25 of user core.
Jan 30 13:57:57.568161 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:57:57.770250 sshd[5574]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:57.775495 systemd[1]: sshd@24-64.23.155.240:22-147.75.109.163:58774.service: Deactivated successfully.
Jan 30 13:57:57.777580 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:57:57.778383 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:57:57.780543 systemd-logind[1442]: Removed session 25.
Jan 30 13:58:02.793098 systemd[1]: Started sshd@25-64.23.155.240:22-147.75.109.163:58786.service - OpenSSH per-connection server daemon (147.75.109.163:58786).
Jan 30 13:58:02.837546 sshd[5608]: Accepted publickey for core from 147.75.109.163 port 58786 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:58:02.840144 sshd[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:58:02.846528 systemd-logind[1442]: New session 26 of user core.
Jan 30 13:58:02.851713 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 13:58:03.036746 sshd[5608]: pam_unix(sshd:session): session closed for user core
Jan 30 13:58:03.042027 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:58:03.042230 systemd[1]: sshd@25-64.23.155.240:22-147.75.109.163:58786.service: Deactivated successfully.
Jan 30 13:58:03.046626 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:58:03.050087 systemd-logind[1442]: Removed session 26.
Jan 30 13:58:08.053865 systemd[1]: Started sshd@26-64.23.155.240:22-147.75.109.163:48336.service - OpenSSH per-connection server daemon (147.75.109.163:48336).
Jan 30 13:58:08.120119 sshd[5629]: Accepted publickey for core from 147.75.109.163 port 48336 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:58:08.122281 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:58:08.127087 systemd-logind[1442]: New session 27 of user core.
Jan 30 13:58:08.138810 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 13:58:08.286757 sshd[5629]: pam_unix(sshd:session): session closed for user core
Jan 30 13:58:08.291308 systemd[1]: sshd@26-64.23.155.240:22-147.75.109.163:48336.service: Deactivated successfully.
Jan 30 13:58:08.294101 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 13:58:08.295577 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit.
Jan 30 13:58:08.296708 systemd-logind[1442]: Removed session 27.