Jan 16 09:05:04.175368 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 16 09:05:04.182484 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 16 09:05:04.182511 kernel: BIOS-provided physical RAM map: Jan 16 09:05:04.182523 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 16 09:05:04.182535 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 16 09:05:04.182546 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 16 09:05:04.182560 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jan 16 09:05:04.182573 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jan 16 09:05:04.182585 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 16 09:05:04.182601 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 16 09:05:04.182615 kernel: NX (Execute Disable) protection: active Jan 16 09:05:04.182627 kernel: APIC: Static calls initialized Jan 16 09:05:04.182640 kernel: SMBIOS 2.8 present. Jan 16 09:05:04.182682 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 16 09:05:04.182698 kernel: Hypervisor detected: KVM Jan 16 09:05:04.182717 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 16 09:05:04.182731 kernel: kvm-clock: using sched offset of 4863480645 cycles Jan 16 09:05:04.182746 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 16 09:05:04.182760 kernel: tsc: Detected 2494.138 MHz processor Jan 16 09:05:04.182774 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 16 09:05:04.182789 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 16 09:05:04.182803 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jan 16 09:05:04.182817 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 16 09:05:04.182831 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 16 09:05:04.182851 kernel: ACPI: Early table checksum verification disabled Jan 16 09:05:04.182864 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jan 16 09:05:04.182879 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:05:04.182893 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:05:04.182907 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:05:04.182920 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 16 09:05:04.182934 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:05:04.182948 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:05:04.182962 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:05:04.182980 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:05:04.182991 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 16 09:05:04.183002 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 16 09:05:04.183015 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 16 09:05:04.183027 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 16 09:05:04.183040 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 16 09:05:04.183053 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 16 09:05:04.183074 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 16 09:05:04.183092 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 16 09:05:04.183107 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 16 09:05:04.183122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 16 09:05:04.183140 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 16 09:05:04.183152 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jan 16 09:05:04.183165 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jan 16 09:05:04.183181 kernel: Zone ranges: Jan 16 09:05:04.183194 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 16 09:05:04.183208 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jan 16 09:05:04.183221 kernel: Normal empty Jan 16 09:05:04.183236 kernel: Movable zone start for each node Jan 16 09:05:04.183250 kernel: Early memory node ranges Jan 16 09:05:04.183264 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 16 09:05:04.183278 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jan 16 09:05:04.183292 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jan 16 09:05:04.183312 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 16 09:05:04.183325 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 16 09:05:04.183337 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jan 16 09:05:04.183349 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 16 09:05:04.183378 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 16 09:05:04.183391 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 16 09:05:04.183404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 16 09:05:04.183416 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 16 09:05:04.183428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 16 09:05:04.183448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 16 09:05:04.183460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 16 09:05:04.183472 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 16 09:05:04.183483 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 16 09:05:04.183511 kernel: TSC deadline timer available Jan 16 09:05:04.183523 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 16 09:05:04.183536 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 16 09:05:04.183548 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 16 09:05:04.183562 kernel: Booting paravirtualized kernel on KVM Jan 16 09:05:04.183583 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 16 09:05:04.183595 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 16 09:05:04.183610 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 16 09:05:04.183626 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 16 09:05:04.183640 kernel: pcpu-alloc: [0] 0 1 Jan 16 09:05:04.183654 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 16 09:05:04.183669 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 16 09:05:04.183683 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 16 09:05:04.183701 kernel: random: crng init done Jan 16 09:05:04.183714 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 16 09:05:04.183728 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 16 09:05:04.183742 kernel: Fallback order for Node 0: 0 Jan 16 09:05:04.183756 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jan 16 09:05:04.183770 kernel: Policy zone: DMA32 Jan 16 09:05:04.183785 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 16 09:05:04.183800 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Jan 16 09:05:04.183815 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 16 09:05:04.183835 kernel: Kernel/User page tables isolation: enabled Jan 16 09:05:04.183849 kernel: ftrace: allocating 37918 entries in 149 pages Jan 16 09:05:04.183864 kernel: ftrace: allocated 149 pages with 4 groups Jan 16 09:05:04.183879 kernel: Dynamic Preempt: voluntary Jan 16 09:05:04.183893 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 16 09:05:04.183909 kernel: rcu: RCU event tracing is enabled. Jan 16 09:05:04.183922 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 16 09:05:04.183935 kernel: Trampoline variant of Tasks RCU enabled. Jan 16 09:05:04.183948 kernel: Rude variant of Tasks RCU enabled. Jan 16 09:05:04.183967 kernel: Tracing variant of Tasks RCU enabled. Jan 16 09:05:04.183982 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 16 09:05:04.183997 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 16 09:05:04.184011 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 16 09:05:04.184026 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 16 09:05:04.184041 kernel: Console: colour VGA+ 80x25 Jan 16 09:05:04.184068 kernel: printk: console [tty0] enabled Jan 16 09:05:04.184083 kernel: printk: console [ttyS0] enabled Jan 16 09:05:04.184098 kernel: ACPI: Core revision 20230628 Jan 16 09:05:04.184113 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 16 09:05:04.184133 kernel: APIC: Switch to symmetric I/O mode setup Jan 16 09:05:04.184148 kernel: x2apic enabled Jan 16 09:05:04.184163 kernel: APIC: Switched APIC routing to: physical x2apic Jan 16 09:05:04.184178 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 16 09:05:04.184194 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 16 09:05:04.184223 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jan 16 09:05:04.184241 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 16 09:05:04.184253 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 16 09:05:04.184284 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 16 09:05:04.184300 kernel: Spectre V2 : Mitigation: Retpolines Jan 16 09:05:04.184314 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 16 09:05:04.184333 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 16 09:05:04.184350 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 16 09:05:04.184404 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 16 09:05:04.184419 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 16 09:05:04.184435 kernel: MDS: Mitigation: Clear CPU buffers Jan 16 09:05:04.184450 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 16 09:05:04.184472 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 16 09:05:04.184489 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 16 09:05:04.184505 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 16 09:05:04.184522 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 16 09:05:04.184539 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 16 09:05:04.184555 kernel: Freeing SMP alternatives memory: 32K Jan 16 09:05:04.184571 kernel: pid_max: default: 32768 minimum: 301 Jan 16 09:05:04.184588 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 16 09:05:04.184608 kernel: landlock: Up and running. Jan 16 09:05:04.184625 kernel: SELinux: Initializing. Jan 16 09:05:04.184641 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 16 09:05:04.184658 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 16 09:05:04.184674 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 16 09:05:04.184687 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:05:04.184702 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:05:04.184716 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:05:04.184735 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 16 09:05:04.184748 kernel: signal: max sigframe size: 1776 Jan 16 09:05:04.184763 kernel: rcu: Hierarchical SRCU implementation. Jan 16 09:05:04.184779 kernel: rcu: Max phase no-delay instances is 400. Jan 16 09:05:04.184794 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 16 09:05:04.184819 kernel: smp: Bringing up secondary CPUs ... Jan 16 09:05:04.184836 kernel: smpboot: x86: Booting SMP configuration: Jan 16 09:05:04.184854 kernel: .... node #0, CPUs: #1 Jan 16 09:05:04.184872 kernel: smp: Brought up 1 node, 2 CPUs Jan 16 09:05:04.184895 kernel: smpboot: Max logical packages: 1 Jan 16 09:05:04.184913 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jan 16 09:05:04.184931 kernel: devtmpfs: initialized Jan 16 09:05:04.184949 kernel: x86/mm: Memory block size: 128MB Jan 16 09:05:04.184967 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 16 09:05:04.184985 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 16 09:05:04.185003 kernel: pinctrl core: initialized pinctrl subsystem Jan 16 09:05:04.185021 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 16 09:05:04.185039 kernel: audit: initializing netlink subsys (disabled) Jan 16 09:05:04.185057 kernel: audit: type=2000 audit(1737018303.173:1): state=initialized audit_enabled=0 res=1 Jan 16 09:05:04.185080 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 16 09:05:04.185098 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 16 09:05:04.185116 kernel: cpuidle: using governor menu Jan 16 09:05:04.185134 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 16 09:05:04.185151 kernel: dca service started, version 1.12.1 Jan 16 09:05:04.185168 kernel: PCI: Using configuration type 1 for base access Jan 16 09:05:04.185183 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 16 09:05:04.185198 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 16 09:05:04.185220 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 16 09:05:04.185237 kernel: ACPI: Added _OSI(Module Device) Jan 16 09:05:04.185255 kernel: ACPI: Added _OSI(Processor Device) Jan 16 09:05:04.185272 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 16 09:05:04.185290 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 16 09:05:04.185308 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 16 09:05:04.185326 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 16 09:05:04.185344 kernel: ACPI: Interpreter enabled Jan 16 09:05:04.185362 kernel: ACPI: PM: (supports S0 S5) Jan 16 09:05:04.185399 kernel: ACPI: Using IOAPIC for interrupt routing Jan 16 09:05:04.185422 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 16 09:05:04.185439 kernel: PCI: Using E820 reservations for host bridge windows Jan 16 09:05:04.185456 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 16 09:05:04.185482 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 16 09:05:04.185813 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 16 09:05:04.185993 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 16 09:05:04.186145 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 16 09:05:04.186176 kernel: acpiphp: Slot [3] registered Jan 16 09:05:04.186192 kernel: acpiphp: Slot [4] registered Jan 16 09:05:04.186209 kernel: acpiphp: Slot [5] registered Jan 16 09:05:04.186224 kernel: acpiphp: Slot [6] registered Jan 16 09:05:04.186237 kernel: acpiphp: Slot [7] registered Jan 16 09:05:04.186250 kernel: acpiphp: Slot [8] registered Jan 16 09:05:04.186265 kernel: acpiphp: Slot [9] registered Jan 16 09:05:04.186280 kernel: acpiphp: Slot [10] registered Jan 16 09:05:04.186296 kernel: acpiphp: Slot [11] registered Jan 16 09:05:04.186309 kernel: acpiphp: Slot [12] registered Jan 16 09:05:04.186328 kernel: acpiphp: Slot [13] registered Jan 16 09:05:04.186341 kernel: acpiphp: Slot [14] registered Jan 16 09:05:04.186355 kernel: acpiphp: Slot [15] registered Jan 16 09:05:04.186390 kernel: acpiphp: Slot [16] registered Jan 16 09:05:04.186406 kernel: acpiphp: Slot [17] registered Jan 16 09:05:04.186422 kernel: acpiphp: Slot [18] registered Jan 16 09:05:04.186437 kernel: acpiphp: Slot [19] registered Jan 16 09:05:04.186453 kernel: acpiphp: Slot [20] registered Jan 16 09:05:04.186469 kernel: acpiphp: Slot [21] registered Jan 16 09:05:04.186492 kernel: acpiphp: Slot [22] registered Jan 16 09:05:04.186508 kernel: acpiphp: Slot [23] registered Jan 16 09:05:04.186523 kernel: acpiphp: Slot [24] registered Jan 16 09:05:04.186539 kernel: acpiphp: Slot [25] registered Jan 16 09:05:04.186555 kernel: acpiphp: Slot [26] registered Jan 16 09:05:04.186570 kernel: acpiphp: Slot [27] registered Jan 16 09:05:04.186585 kernel: acpiphp: Slot [28] registered Jan 16 09:05:04.186601 kernel: acpiphp: Slot [29] registered Jan 16 09:05:04.186616 kernel: acpiphp: Slot [30] registered Jan 16 09:05:04.186628 kernel: acpiphp: Slot [31] registered Jan 16 09:05:04.186641 kernel: PCI host bridge to bus 0000:00 Jan 16 09:05:04.186830 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 16 09:05:04.186965 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 16 09:05:04.187113 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 16 09:05:04.187248 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 16 09:05:04.187426 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 16 09:05:04.187602 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 16 09:05:04.187792 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 16 09:05:04.188008 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 16 09:05:04.188192 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 16 09:05:04.188354 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 16 09:05:04.188649 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 16 09:05:04.189601 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 16 09:05:04.189782 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 16 09:05:04.189929 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 16 09:05:04.190105 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 16 09:05:04.190305 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 16 09:05:04.190507 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 16 09:05:04.190654 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 16 09:05:04.190797 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 16 09:05:04.190952 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 16 09:05:04.191116 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 16 09:05:04.191268 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 16 09:05:04.193613 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 16 09:05:04.193829 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 16 09:05:04.194004 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 16 09:05:04.194198 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 16 09:05:04.194378 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 16 09:05:04.194552 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 16 09:05:04.194774 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 16 09:05:04.197202 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 16 09:05:04.198480 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 16 09:05:04.198692 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 16 09:05:04.198840 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 16 09:05:04.199011 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 16 09:05:04.199167 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 16 09:05:04.199316 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 16 09:05:04.199660 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 16 09:05:04.199884 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 16 09:05:04.200045 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 16 09:05:04.200199 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 16 09:05:04.200347 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 16 09:05:04.200554 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 16 09:05:04.200709 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 16 09:05:04.200870 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 16 09:05:04.201039 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 16 09:05:04.201239 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 16 09:05:04.201413 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 16 09:05:04.201573 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 16 09:05:04.201594 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 16 09:05:04.201610 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 16 09:05:04.201627 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 16 09:05:04.201644 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 16 09:05:04.201661 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 16 09:05:04.201677 kernel: iommu: Default domain type: Translated Jan 16 09:05:04.201694 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 16 09:05:04.201711 kernel: PCI: Using ACPI for IRQ routing Jan 16 09:05:04.201733 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 16 09:05:04.201747 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 16 09:05:04.201761 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jan 16 09:05:04.201927 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 16 09:05:04.202078 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 16 09:05:04.202225 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 16 09:05:04.202246 kernel: vgaarb: loaded Jan 16 09:05:04.202262 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 16 09:05:04.202286 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 16 09:05:04.202303 kernel: clocksource: Switched to clocksource kvm-clock Jan 16 09:05:04.202320 kernel: VFS: Disk quotas dquot_6.6.0 Jan 16 09:05:04.202337 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 16 09:05:04.202354 kernel: pnp: PnP ACPI init Jan 16 09:05:04.202398 kernel: pnp: PnP ACPI: found 4 devices Jan 16 09:05:04.202415 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 16 09:05:04.202432 kernel: NET: Registered PF_INET protocol family Jan 16 09:05:04.202448 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 16 09:05:04.202470 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 16 09:05:04.202487 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 16 09:05:04.202503 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 16 09:05:04.202520 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 16 09:05:04.202537 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 16 09:05:04.202553 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 16 09:05:04.202570 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 16 09:05:04.202587 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 16 09:05:04.202606 kernel: NET: Registered PF_XDP protocol family Jan 16 09:05:04.202769 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 16 09:05:04.202911 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 16 
09:05:04.203040 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 16 09:05:04.203178 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 16 09:05:04.203309 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 16 09:05:04.203630 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 16 09:05:04.203794 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 16 09:05:04.203817 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 16 09:05:04.203979 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 38139 usecs Jan 16 09:05:04.204002 kernel: PCI: CLS 0 bytes, default 64 Jan 16 09:05:04.204019 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 16 09:05:04.204036 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 16 09:05:04.204053 kernel: Initialise system trusted keyrings Jan 16 09:05:04.204070 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 16 09:05:04.204087 kernel: Key type asymmetric registered Jan 16 09:05:04.204103 kernel: Asymmetric key parser 'x509' registered Jan 16 09:05:04.204121 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 16 09:05:04.204141 kernel: io scheduler mq-deadline registered Jan 16 09:05:04.204155 kernel: io scheduler kyber registered Jan 16 09:05:04.204172 kernel: io scheduler bfq registered Jan 16 09:05:04.204193 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 16 09:05:04.204217 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 16 09:05:04.204240 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 16 09:05:04.204261 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 16 09:05:04.204284 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 16 09:05:04.204306 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 16 09:05:04.204336 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 16 09:05:04.204356 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 16 09:05:04.204409 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 16 09:05:04.204426 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 16 09:05:04.204627 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 16 09:05:04.204772 kernel: rtc_cmos 00:03: registered as rtc0 Jan 16 09:05:04.204910 kernel: rtc_cmos 00:03: setting system clock to 2025-01-16T09:05:03 UTC (1737018303) Jan 16 09:05:04.205051 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 16 09:05:04.205070 kernel: intel_pstate: CPU model not supported Jan 16 09:05:04.205087 kernel: NET: Registered PF_INET6 protocol family Jan 16 09:05:04.205101 kernel: Segment Routing with IPv6 Jan 16 09:05:04.205115 kernel: In-situ OAM (IOAM) with IPv6 Jan 16 09:05:04.205129 kernel: NET: Registered PF_PACKET protocol family Jan 16 09:05:04.205144 kernel: Key type dns_resolver registered Jan 16 09:05:04.205159 kernel: IPI shorthand broadcast: enabled Jan 16 09:05:04.205175 kernel: sched_clock: Marking stable (1401005939, 139562647)->(1672076048, -131507462) Jan 16 09:05:04.205191 kernel: registered taskstats version 1 Jan 16 09:05:04.205213 kernel: Loading compiled-in X.509 certificates Jan 16 09:05:04.205229 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 16 09:05:04.205245 kernel: Key type .fscrypt 
registered Jan 16 09:05:04.205261 kernel: Key type fscrypt-provisioning registered Jan 16 09:05:04.205277 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 16 09:05:04.205293 kernel: ima: Allocated hash algorithm: sha1 Jan 16 09:05:04.205310 kernel: ima: No architecture policies found Jan 16 09:05:04.205323 kernel: clk: Disabling unused clocks Jan 16 09:05:04.205341 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 16 09:05:04.205384 kernel: Write protecting the kernel read-only data: 36864k Jan 16 09:05:04.205447 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 16 09:05:04.205475 kernel: Run /init as init process Jan 16 09:05:04.205494 kernel: with arguments: Jan 16 09:05:04.205587 kernel: /init Jan 16 09:05:04.205604 kernel: with environment: Jan 16 09:05:04.205621 kernel: HOME=/ Jan 16 09:05:04.205637 kernel: TERM=linux Jan 16 09:05:04.205655 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 16 09:05:04.205681 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 09:05:04.205703 systemd[1]: Detected virtualization kvm. Jan 16 09:05:04.205721 systemd[1]: Detected architecture x86-64. Jan 16 09:05:04.205739 systemd[1]: Running in initrd. Jan 16 09:05:04.205756 systemd[1]: No hostname configured, using default hostname. Jan 16 09:05:04.205773 systemd[1]: Hostname set to . Jan 16 09:05:04.205796 systemd[1]: Initializing machine ID from VM UUID. Jan 16 09:05:04.205813 systemd[1]: Queued start job for default target initrd.target. Jan 16 09:05:04.205832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:05:04.205850 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:05:04.205869 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 16 09:05:04.205888 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 09:05:04.205907 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 16 09:05:04.205926 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 16 09:05:04.205952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 16 09:05:04.205971 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 16 09:05:04.205989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:05:04.206008 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:05:04.206025 systemd[1]: Reached target paths.target - Path Units. Jan 16 09:05:04.206044 systemd[1]: Reached target slices.target - Slice Units. Jan 16 09:05:04.206062 systemd[1]: Reached target swap.target - Swaps. Jan 16 09:05:04.206085 systemd[1]: Reached target timers.target - Timer Units. Jan 16 09:05:04.206104 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 09:05:04.206122 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 16 09:05:04.206140 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 09:05:04.206159 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 16 09:05:04.206177 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:05:04.206204 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 09:05:04.206222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:05:04.206241 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 09:05:04.206260 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 16 09:05:04.206279 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 09:05:04.206295 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 16 09:05:04.206312 systemd[1]: Starting systemd-fsck-usr.service... Jan 16 09:05:04.206329 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 09:05:04.206345 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 09:05:04.206386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:05:04.206404 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 16 09:05:04.206421 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:05:04.206439 systemd[1]: Finished systemd-fsck-usr.service. Jan 16 09:05:04.206460 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 09:05:04.206484 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:05:04.206503 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 09:05:04.206574 systemd-journald[183]: Collecting audit messages is disabled. Jan 16 09:05:04.206619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:04.206642 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 16 09:05:04.206661 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:05:04.206680 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:05:04.206699 kernel: Bridge firewalling registered Jan 16 09:05:04.206717 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 09:05:04.206738 systemd-journald[183]: Journal started Jan 16 09:05:04.206780 systemd-journald[183]: Runtime Journal (/run/log/journal/c1229e7d7d20486287aa65290ae37eb4) is 4.9M, max 39.3M, 34.4M free. Jan 16 09:05:04.122554 systemd-modules-load[184]: Inserted module 'overlay' Jan 16 09:05:04.197508 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 16 09:05:04.212800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:05:04.216411 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 09:05:04.237742 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 09:05:04.243756 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:05:04.245296 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 16 09:05:04.256760 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 16 09:05:04.267936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:05:04.278724 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 09:05:04.286244 dracut-cmdline[218]: dracut-dracut-053 Jan 16 09:05:04.291414 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 16 09:05:04.331203 systemd-resolved[220]: Positive Trust Anchors: Jan 16 09:05:04.331224 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 09:05:04.331272 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 09:05:04.335722 systemd-resolved[220]: Defaulting to hostname 'linux'. Jan 16 09:05:04.337588 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 09:05:04.340431 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:05:04.449534 kernel: SCSI subsystem initialized Jan 16 09:05:04.462443 kernel: Loading iSCSI transport class v2.0-870. Jan 16 09:05:04.477465 kernel: iscsi: registered transport (tcp) Jan 16 09:05:04.509343 kernel: iscsi: registered transport (qla4xxx) Jan 16 09:05:04.509474 kernel: QLogic iSCSI HBA Driver Jan 16 09:05:04.621394 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 16 09:05:04.649258 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 16 09:05:04.694810 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 16 09:05:04.694891 kernel: device-mapper: uevent: version 1.0.3 Jan 16 09:05:04.694907 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 16 09:05:04.747453 kernel: raid6: avx2x4 gen() 13070 MB/s Jan 16 09:05:04.764452 kernel: raid6: avx2x2 gen() 13103 MB/s Jan 16 09:05:04.781557 kernel: raid6: avx2x1 gen() 10434 MB/s Jan 16 09:05:04.781687 kernel: raid6: using algorithm avx2x2 gen() 13103 MB/s Jan 16 09:05:04.800529 kernel: raid6: .... xor() 12093 MB/s, rmw enabled Jan 16 09:05:04.800641 kernel: raid6: using avx2x2 recovery algorithm Jan 16 09:05:04.830433 kernel: xor: automatically using best checksumming function avx Jan 16 09:05:05.038416 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 09:05:05.058987 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 16 09:05:05.075859 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 16 09:05:05.115779 systemd-udevd[404]: Using default interface naming scheme 'v255'. Jan 16 09:05:05.122834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:05:05.132108 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 16 09:05:05.179422 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jan 16 09:05:05.259412 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 09:05:05.278320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 09:05:05.390165 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:05:05.398756 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 16 09:05:05.437661 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 16 09:05:05.440226 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 09:05:05.441934 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:05:05.443082 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 09:05:05.449467 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 16 09:05:05.491444 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 16 09:05:05.532695 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 16 09:05:05.544616 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 16 09:05:05.544665 kernel: GPT:9289727 != 125829119 Jan 16 09:05:05.544684 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 16 09:05:05.544700 kernel: GPT:9289727 != 125829119 Jan 16 09:05:05.544719 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 16 09:05:05.544740 kernel: scsi host0: Virtio SCSI HBA Jan 16 09:05:05.559086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:05:05.559134 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 16 09:05:05.572613 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Jan 16 09:05:05.572821 kernel: cryptd: max_cpu_qlen set to 1000 Jan 16 09:05:05.491414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 16 09:05:05.589483 kernel: libata version 3.00 loaded. Jan 16 09:05:05.593403 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 16 09:05:05.655703 kernel: scsi host1: ata_piix Jan 16 09:05:05.655970 kernel: scsi host2: ata_piix Jan 16 09:05:05.656177 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 16 09:05:05.656202 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 16 09:05:05.680162 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 09:05:05.680288 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:05:05.681051 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:05:05.681645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:05:05.681735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:05.682314 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:05:05.736144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:05:05.742524 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 16 09:05:05.742598 kernel: AES CTR mode by8 optimization enabled Jan 16 09:05:05.780492 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (451) Jan 16 09:05:05.794493 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 16 09:05:05.876524 kernel: ACPI: bus type USB registered Jan 16 09:05:05.876565 kernel: usbcore: registered new interface driver usbfs Jan 16 09:05:05.876586 kernel: usbcore: registered new interface driver hub Jan 16 09:05:05.876607 kernel: usbcore: registered new device driver usb Jan 16 09:05:05.876625 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464) Jan 16 09:05:05.882430 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 16 09:05:05.889940 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 16 09:05:05.890256 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 16 09:05:05.890516 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 16 09:05:05.890742 kernel: hub 1-0:1.0: USB hub found Jan 16 09:05:05.891320 kernel: hub 1-0:1.0: 2 ports detected Jan 16 09:05:05.888892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:05.903614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 16 09:05:05.916721 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 09:05:05.924982 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 16 09:05:05.925769 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 16 09:05:05.933800 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 09:05:05.938639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:05:05.949907 disk-uuid[542]: Primary Header is updated. Jan 16 09:05:05.949907 disk-uuid[542]: Secondary Entries is updated. Jan 16 09:05:05.949907 disk-uuid[542]: Secondary Header is updated. Jan 16 09:05:05.973435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:05:05.981936 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:05:06.007012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:05:06.025485 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:05:07.031904 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:05:07.033100 disk-uuid[543]: The operation has completed successfully. Jan 16 09:05:07.126323 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 09:05:07.126693 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 09:05:07.160697 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 16 09:05:07.180182 sh[564]: Success Jan 16 09:05:07.203444 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 16 09:05:07.370706 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 16 09:05:07.371771 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 16 09:05:07.377318 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 16 09:05:07.424448 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 16 09:05:07.424544 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:05:07.427621 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 16 09:05:07.427724 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 09:05:07.428242 kernel: BTRFS info (device dm-0): using free space tree Jan 16 09:05:07.448817 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 16 09:05:07.450136 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 09:05:07.460751 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 09:05:07.464625 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 09:05:07.486927 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:05:07.487018 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:05:07.487042 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:05:07.497450 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:05:07.519011 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:05:07.518583 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 16 09:05:07.530551 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 09:05:07.539869 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 16 09:05:07.719500 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 09:05:07.728784 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 09:05:07.746707 ignition[658]: Ignition 2.19.0 Jan 16 09:05:07.746723 ignition[658]: Stage: fetch-offline Jan 16 09:05:07.746778 ignition[658]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:05:07.750881 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 09:05:07.746793 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:05:07.746979 ignition[658]: parsed url from cmdline: "" Jan 16 09:05:07.746986 ignition[658]: no config URL provided Jan 16 09:05:07.746996 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 09:05:07.747009 ignition[658]: no config at "/usr/lib/ignition/user.ign" Jan 16 09:05:07.747018 ignition[658]: failed to fetch config: resource requires networking Jan 16 09:05:07.747333 ignition[658]: Ignition finished successfully Jan 16 09:05:07.773304 systemd-networkd[753]: lo: Link UP Jan 16 09:05:07.773330 systemd-networkd[753]: lo: Gained carrier Jan 16 09:05:07.776911 systemd-networkd[753]: Enumeration completed Jan 16 09:05:07.777519 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 16 09:05:07.777525 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 16 09:05:07.777847 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 16 09:05:07.778750 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 09:05:07.778756 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 09:05:07.778872 systemd[1]: Reached target network.target - Network. Jan 16 09:05:07.779868 systemd-networkd[753]: eth0: Link UP Jan 16 09:05:07.779876 systemd-networkd[753]: eth0: Gained carrier Jan 16 09:05:07.779890 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 16 09:05:07.783964 systemd-networkd[753]: eth1: Link UP Jan 16 09:05:07.783969 systemd-networkd[753]: eth1: Gained carrier Jan 16 09:05:07.783988 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 09:05:07.791547 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 16 09:05:07.800482 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.2/20 acquired from 169.254.169.253 Jan 16 09:05:07.810486 systemd-networkd[753]: eth0: DHCPv4 address 146.190.127.227/20, gateway 146.190.112.1 acquired from 169.254.169.253 Jan 16 09:05:07.825204 ignition[757]: Ignition 2.19.0 Jan 16 09:05:07.825406 ignition[757]: Stage: fetch Jan 16 09:05:07.825682 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:05:07.825695 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:05:07.825860 ignition[757]: parsed url from cmdline: "" Jan 16 09:05:07.825867 ignition[757]: no config URL provided Jan 16 09:05:07.825876 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 09:05:07.825888 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jan 16 09:05:07.825916 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 16 09:05:07.851899 ignition[757]: GET result: OK Jan 16 09:05:07.853010 ignition[757]: parsing config with SHA512: ed24faa5dbd96f66d61f37a82f2fafcbcbd88fb9e2b2898efa8d81fffb88169eaacbf006c8626faff65c52565fb6c481101d3ffaf24947603d08b31a8a513e8f Jan 16 09:05:07.862722 unknown[757]: fetched base config from "system" Jan 16 09:05:07.862743 unknown[757]: fetched base config from "system" Jan 16 09:05:07.862753 unknown[757]: fetched user config from "digitalocean" Jan 16 09:05:07.864358 ignition[757]: fetch: fetch complete Jan 16 09:05:07.864396 ignition[757]: fetch: fetch passed Jan 16 09:05:07.864615 ignition[757]: Ignition finished successfully Jan 16 09:05:07.867663 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 09:05:07.896790 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 16 09:05:07.965126 ignition[764]: Ignition 2.19.0 Jan 16 09:05:07.965157 ignition[764]: Stage: kargs Jan 16 09:05:07.965599 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:05:07.965619 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:05:07.972556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 09:05:07.967249 ignition[764]: kargs: kargs passed Jan 16 09:05:07.967381 ignition[764]: Ignition finished successfully Jan 16 09:05:07.982940 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 16 09:05:08.084358 ignition[770]: Ignition 2.19.0 Jan 16 09:05:08.085718 ignition[770]: Stage: disks Jan 16 09:05:08.086257 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:05:08.086284 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:05:08.092424 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 16 09:05:08.089921 ignition[770]: disks: disks passed Jan 16 09:05:08.102787 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 09:05:08.090059 ignition[770]: Ignition finished successfully Jan 16 09:05:08.104030 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 09:05:08.104668 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 09:05:08.105131 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 09:05:08.105568 systemd[1]: Reached target basic.target - Basic System. Jan 16 09:05:08.129389 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 09:05:08.196795 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 16 09:05:08.220150 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 09:05:08.241717 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 09:05:08.536422 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 16 09:05:08.542686 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 09:05:08.543830 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 09:05:08.564225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 09:05:08.571713 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 09:05:08.592257 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 16 09:05:08.603439 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 16 09:05:08.623524 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787) Jan 16 09:05:08.623600 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:05:08.623626 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:05:08.623645 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:05:08.604212 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 09:05:08.604278 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 09:05:08.625788 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 09:05:08.694038 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 16 09:05:08.724692 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:05:08.778677 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 16 09:05:08.835417 coreos-metadata[790]: Jan 16 09:05:08.835 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:05:08.854482 coreos-metadata[789]: Jan 16 09:05:08.854 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:05:08.867203 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 16 09:05:08.875315 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 16 09:05:08.885589 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jan 16 09:05:08.893741 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Jan 16 09:05:09.026936 coreos-metadata[790]: Jan 16 09:05:09.026 INFO Fetch successful Jan 16 09:05:09.043801 coreos-metadata[790]: Jan 16 09:05:09.043 INFO wrote hostname ci-4081.3.0-a-d8418dcdb9 to /sysroot/etc/hostname Jan 16 09:05:09.046577 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 09:05:09.081578 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 09:05:09.093120 coreos-metadata[789]: Jan 16 09:05:09.090 INFO Fetch successful Jan 16 09:05:09.092796 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 09:05:09.116559 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 09:05:09.117788 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 16 09:05:09.117993 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 16 09:05:09.135556 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:05:09.136753 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 09:05:09.278646 ignition[910]: INFO : Ignition 2.19.0 Jan 16 09:05:09.278646 ignition[910]: INFO : Stage: mount Jan 16 09:05:09.278646 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:05:09.278646 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:05:09.286600 ignition[910]: INFO : mount: mount passed Jan 16 09:05:09.286600 ignition[910]: INFO : Ignition finished successfully Jan 16 09:05:09.288837 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 09:05:09.320956 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 09:05:09.360960 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 09:05:09.362317 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 09:05:09.394260 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (920) Jan 16 09:05:09.405501 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:05:09.405650 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:05:09.406705 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:05:09.426967 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:05:09.465526 systemd-networkd[753]: eth0: Gained IPv6LL Jan 16 09:05:09.466199 systemd-networkd[753]: eth1: Gained IPv6LL Jan 16 09:05:09.473434 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
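The metadata-hostname agent above fetches the droplet's metadata JSON and writes the hostname into the not-yet-switched root at /sysroot/etc/hostname. A minimal sketch of that idea, assuming the metadata document exposes a top-level "hostname" key (the URL and target path are taken from the log; the parsing here is simplified and not the agent's real code):

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def write_hostname(sysroot: str = "/sysroot") -> str:
        with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
            meta = json.load(resp)
        hostname = meta["hostname"]  # assumed key; the real agent may parse differently
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        print("wrote hostname", write_hostname())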
Jan 16 09:05:09.629754 ignition[939]: INFO : Ignition 2.19.0 Jan 16 09:05:09.629754 ignition[939]: INFO : Stage: files Jan 16 09:05:09.629754 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:05:09.629754 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:05:09.629754 ignition[939]: DEBUG : files: compiled without relabeling support, skipping Jan 16 09:05:09.656893 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 09:05:09.656893 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 09:05:09.671882 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 09:05:09.675114 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 09:05:09.675114 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 09:05:09.673794 unknown[939]: wrote ssh authorized keys file for user: core Jan 16 09:05:09.681702 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 09:05:09.681702 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 16 09:05:09.784183 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 16 09:05:09.948721 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 09:05:09.948721 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 16 09:05:09.948721 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 09:05:09.948721 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 09:05:09.948721 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 09:05:09.948721 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 09:05:09.948721 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 09:05:09.948721 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 09:05:09.948721 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 09:05:09.960336 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 09:05:09.961660 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 09:05:09.961660 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 16 09:05:09.964746 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 16 09:05:09.964746 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 16 09:05:09.964746 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 16 09:05:10.348459 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 16 09:05:10.999142 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 16 09:05:10.999142 ignition[939]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 16 09:05:11.004607 ignition[939]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 09:05:11.008424 ignition[939]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 09:05:11.008424 ignition[939]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 16 09:05:11.008424 ignition[939]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 16 09:05:11.008424 ignition[939]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 09:05:11.008424 ignition[939]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 09:05:11.008424 ignition[939]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 09:05:11.008424 ignition[939]: INFO : files: files passed Jan 16 09:05:11.008424 ignition[939]: INFO : Ignition finished successfully Jan 16 09:05:11.017822 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 09:05:11.051445 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 09:05:11.069864 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 09:05:11.080147 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 09:05:11.081307 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 09:05:11.138194 initrd-setup-root-after-ignition[967]: grep: Jan 16 09:05:11.139263 initrd-setup-root-after-ignition[971]: grep: Jan 16 09:05:11.139263 initrd-setup-root-after-ignition[967]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 09:05:11.139263 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 09:05:11.149722 initrd-setup-root-after-ignition[971]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 09:05:11.154607 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 09:05:11.163402 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 09:05:11.176050 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 09:05:11.328892 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jan 16 09:05:11.329283 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 09:05:11.335581 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 09:05:11.336462 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 09:05:11.337709 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 09:05:11.355878 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 09:05:11.424935 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 09:05:11.486748 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 09:05:11.583198 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:05:11.590967 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:05:11.599166 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 09:05:11.604841 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 09:05:11.605286 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 09:05:11.606654 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 09:05:11.610761 systemd[1]: Stopped target basic.target - Basic System. Jan 16 09:05:11.611770 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 09:05:11.612707 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 09:05:11.622340 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 09:05:11.623947 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 09:05:11.625933 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 09:05:11.627823 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 09:05:11.629860 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 09:05:11.630654 systemd[1]: Stopped target swap.target - Swaps. Jan 16 09:05:11.632795 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 09:05:11.633134 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 09:05:11.647352 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:05:11.649206 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:05:11.650736 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 09:05:11.652320 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:05:11.654700 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 09:05:11.654969 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 09:05:11.668808 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 09:05:11.669329 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 09:05:11.675231 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 09:05:11.675652 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 09:05:11.677322 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 09:05:11.677614 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 16 09:05:11.717524 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 09:05:11.723341 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 09:05:11.733122 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 09:05:11.738653 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:05:11.740150 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 09:05:11.741091 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 09:05:11.754691 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 09:05:11.760827 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 09:05:11.776576 ignition[991]: INFO : Ignition 2.19.0 Jan 16 09:05:11.776576 ignition[991]: INFO : Stage: umount Jan 16 09:05:11.776576 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:05:11.776576 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:05:11.781311 ignition[991]: INFO : umount: umount passed Jan 16 09:05:11.781311 ignition[991]: INFO : Ignition finished successfully Jan 16 09:05:11.785911 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 09:05:11.786097 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 09:05:11.792131 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 09:05:11.792341 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 09:05:11.827853 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 09:05:11.828197 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 09:05:11.831564 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 09:05:11.831700 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 09:05:11.842843 systemd[1]: Stopped target network.target - Network. Jan 16 09:05:11.843564 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 09:05:11.843720 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 09:05:11.850895 systemd[1]: Stopped target paths.target - Path Units. Jan 16 09:05:11.852393 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 09:05:11.854703 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:05:11.858451 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 09:05:11.859009 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 09:05:11.860338 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 09:05:11.860473 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 09:05:11.863730 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 09:05:11.863841 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 09:05:11.864415 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 09:05:11.864522 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 09:05:11.865153 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 09:05:11.865238 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 09:05:11.866411 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 09:05:11.867143 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 16 09:05:11.869541 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 09:05:11.870350 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 09:05:11.870584 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 09:05:11.872876 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 09:05:11.873078 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 09:05:11.879222 systemd-networkd[753]: eth1: DHCPv6 lease lost Jan 16 09:05:11.905125 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 09:05:11.906542 systemd-networkd[753]: eth0: DHCPv6 lease lost Jan 16 09:05:11.906633 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 09:05:11.913333 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 09:05:11.913646 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 09:05:11.945210 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 09:05:11.945329 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:05:11.968118 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 09:05:11.969071 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 09:05:11.969219 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 09:05:11.983892 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 09:05:11.984021 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:05:11.988467 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 09:05:11.988609 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 09:05:11.996349 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 09:05:11.996527 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:05:11.997754 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 09:05:12.056480 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 09:05:12.056801 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 09:05:12.062927 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 09:05:12.063220 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:05:12.067270 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 09:05:12.070799 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 09:05:12.073288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 09:05:12.073525 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:05:12.074513 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 09:05:12.074638 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 09:05:12.077710 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 09:05:12.077940 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 09:05:12.083301 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 09:05:12.084799 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:05:12.105032 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 16 09:05:12.105757 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 09:05:12.106009 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:05:12.106853 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 16 09:05:12.106954 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:05:12.110053 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 09:05:12.110183 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:05:12.110885 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:05:12.111030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:12.123766 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 09:05:12.124073 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 09:05:12.125286 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 09:05:12.134140 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 09:05:12.159580 systemd[1]: Switching root. Jan 16 09:05:12.236715 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 16 09:05:12.236852 systemd-journald[183]: Journal stopped Jan 16 09:05:14.592590 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 09:05:14.592713 kernel: SELinux: policy capability open_perms=1 Jan 16 09:05:14.592736 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 09:05:14.592755 kernel: SELinux: policy capability always_check_network=0 Jan 16 09:05:14.592781 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 09:05:14.592806 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 09:05:14.592828 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 09:05:14.592840 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 09:05:14.592853 kernel: audit: type=1403 audit(1737018312.645:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 16 09:05:14.592877 systemd[1]: Successfully loaded SELinux policy in 56.497ms. Jan 16 09:05:14.592908 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.904ms. Jan 16 09:05:14.592922 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 09:05:14.592953 systemd[1]: Detected virtualization kvm. Jan 16 09:05:14.592974 systemd[1]: Detected architecture x86-64. Jan 16 09:05:14.592988 systemd[1]: Detected first boot. Jan 16 09:05:14.593002 systemd[1]: Hostname set to <ci-4081.3.0-a-d8418dcdb9>. Jan 16 09:05:14.593018 systemd[1]: Initializing machine ID from VM UUID. Jan 16 09:05:14.593033 zram_generator::config[1035]: No configuration found. Jan 16 09:05:14.593053 systemd[1]: Populated /etc with preset unit settings. Jan 16 09:05:14.593077 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 09:05:14.593104 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 16 09:05:14.593124 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 09:05:14.593142 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 09:05:14.593156 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 09:05:14.593168 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 09:05:14.593181 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 09:05:14.593194 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 09:05:14.593207 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 09:05:14.593220 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 09:05:14.593241 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 09:05:14.593260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:05:14.593279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:05:14.593297 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 09:05:14.593315 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 09:05:14.593335 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 09:05:14.593359 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 09:05:14.593408 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 16 09:05:14.593426 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:05:14.593450 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 09:05:14.593468 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 16 09:05:14.593486 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 09:05:14.593506 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 09:05:14.593524 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:05:14.593543 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 09:05:14.593565 systemd[1]: Reached target slices.target - Slice Units. Jan 16 09:05:14.593583 systemd[1]: Reached target swap.target - Swaps. Jan 16 09:05:14.593603 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 09:05:14.593623 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 09:05:14.593643 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:05:14.593666 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 09:05:14.593689 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:05:14.593714 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 16 09:05:14.593731 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 09:05:14.593755 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 09:05:14.593773 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 09:05:14.593792 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 16 09:05:14.593809 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 09:05:14.593827 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 09:05:14.593847 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 09:05:14.593869 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 09:05:14.593888 systemd[1]: Reached target machines.target - Containers. Jan 16 09:05:14.593912 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 09:05:14.593932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:14.593958 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 09:05:14.593977 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 09:05:14.593996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:05:14.594043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 09:05:14.594077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 09:05:14.594099 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 09:05:14.594121 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:05:14.594149 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 09:05:14.594172 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 09:05:14.594194 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 09:05:14.594228 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 09:05:14.594250 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 09:05:14.594273 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 09:05:14.594294 kernel: fuse: init (API version 7.39) Jan 16 09:05:14.594317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 09:05:14.594341 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 09:05:14.594887 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 09:05:14.594927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 09:05:14.594950 systemd[1]: verity-setup.service: Deactivated successfully. Jan 16 09:05:14.594973 systemd[1]: Stopped verity-setup.service. Jan 16 09:05:14.594998 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:14.595078 systemd-journald[1103]: Collecting audit messages is disabled. Jan 16 09:05:14.595131 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 09:05:14.595149 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 09:05:14.595169 systemd-journald[1103]: Journal started Jan 16 09:05:14.595204 systemd-journald[1103]: Runtime Journal (/run/log/journal/c1229e7d7d20486287aa65290ae37eb4) is 4.9M, max 39.3M, 34.4M free. Jan 16 09:05:14.210952 systemd[1]: Queued start job for default target multi-user.target. 
Jan 16 09:05:14.250905 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 16 09:05:14.251663 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 09:05:14.615618 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 09:05:14.629807 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 09:05:14.631943 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 09:05:14.632914 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 09:05:14.633582 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 09:05:14.635253 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:05:14.637535 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 09:05:14.639635 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 09:05:14.640902 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:05:14.641470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:05:14.643254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:05:14.643550 kernel: loop: module loaded Jan 16 09:05:14.643598 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:05:14.647386 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 09:05:14.647670 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 09:05:14.648706 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:05:14.650621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 09:05:14.651736 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 09:05:14.653331 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 09:05:14.654611 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 09:05:14.687069 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 09:05:14.699625 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 09:05:14.711916 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 09:05:14.712644 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 09:05:14.712717 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 09:05:14.718464 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 16 09:05:14.745195 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 09:05:14.748122 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 09:05:14.748961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:14.763548 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 09:05:14.766806 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 09:05:14.767502 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 16 09:05:14.770832 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 09:05:14.771603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 09:05:14.799793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:05:14.803867 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 09:05:14.809020 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 09:05:14.816286 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 09:05:14.817274 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 09:05:14.828393 kernel: ACPI: bus type drm_connector registered Jan 16 09:05:14.833618 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 09:05:14.843822 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 09:05:14.844133 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 09:05:14.870598 systemd-journald[1103]: Time spent on flushing to /var/log/journal/c1229e7d7d20486287aa65290ae37eb4 is 109.853ms for 990 entries. Jan 16 09:05:14.870598 systemd-journald[1103]: System Journal (/var/log/journal/c1229e7d7d20486287aa65290ae37eb4) is 8.0M, max 195.6M, 187.6M free. Jan 16 09:05:15.021420 systemd-journald[1103]: Received client request to flush runtime journal. Jan 16 09:05:15.021472 kernel: loop0: detected capacity change from 0 to 142488 Jan 16 09:05:14.871770 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 09:05:14.874109 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 09:05:14.877127 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 09:05:14.895415 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 16 09:05:14.943576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:05:14.952742 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 16 09:05:14.982075 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:05:15.034140 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 09:05:15.038796 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 16 09:05:15.038826 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 16 09:05:15.046993 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 16 09:05:15.063952 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:05:15.083325 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 09:05:15.090643 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 09:05:15.131395 kernel: loop1: detected capacity change from 0 to 210664 Jan 16 09:05:15.182588 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 09:05:15.185185 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 16 09:05:15.216857 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 16 09:05:15.258520 kernel: loop2: detected capacity change from 0 to 8 Jan 16 09:05:15.256999 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 09:05:15.323683 kernel: loop3: detected capacity change from 0 to 140768 Jan 16 09:05:15.350796 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 16 09:05:15.350828 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 16 09:05:15.360026 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:05:15.428992 kernel: loop4: detected capacity change from 0 to 142488 Jan 16 09:05:15.472421 kernel: loop5: detected capacity change from 0 to 210664 Jan 16 09:05:15.504424 kernel: loop6: detected capacity change from 0 to 8 Jan 16 09:05:15.512464 kernel: loop7: detected capacity change from 0 to 140768 Jan 16 09:05:15.569335 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 16 09:05:15.570423 (sd-merge)[1183]: Merged extensions into '/usr'. Jan 16 09:05:15.583944 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 09:05:15.583971 systemd[1]: Reloading... Jan 16 09:05:15.721399 zram_generator::config[1212]: No configuration found. Jan 16 09:05:16.098206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:05:16.188162 systemd[1]: Reloading finished in 603 ms. Jan 16 09:05:16.248043 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 09:05:16.276721 systemd[1]: Starting ensure-sysext.service... Jan 16 09:05:16.285725 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 09:05:16.298299 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 09:05:16.320504 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Jan 16 09:05:16.320527 systemd[1]: Reloading... Jan 16 09:05:16.337549 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 09:05:16.338686 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 16 09:05:16.341119 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 16 09:05:16.341863 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Jan 16 09:05:16.342101 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Jan 16 09:05:16.347658 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 09:05:16.347896 systemd-tmpfiles[1252]: Skipping /boot Jan 16 09:05:16.363204 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 09:05:16.363557 systemd-tmpfiles[1252]: Skipping /boot Jan 16 09:05:16.492411 zram_generator::config[1279]: No configuration found. Jan 16 09:05:16.721336 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:05:16.801522 systemd[1]: Reloading finished in 480 ms. 
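The (sd-merge) lines above show systemd-sysext overlaying the containerd, docker, kubernetes and oem-digitalocean extension images onto /usr, after which systemd reloads its unit set. A small sketch that merely lists the images visible in /etc/extensions, where the files stage earlier linked kubernetes.raw (systemd-sysext also scans other extension directories not covered here):

    from pathlib import Path

    ext_dir = Path("/etc/extensions")
    if ext_dir.exists():
        for image in sorted(ext_dir.glob("*.raw")):
            target = image.resolve() if image.is_symlink() else image
            print(f"{image.name} -> {target}")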
Jan 16 09:05:16.824500 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 09:05:16.826527 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 09:05:16.832828 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:05:16.867949 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 09:05:16.873899 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 09:05:16.878685 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 09:05:16.885130 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 09:05:16.889735 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 09:05:16.898638 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 09:05:16.906992 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:16.908717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:16.916914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:05:16.928540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 09:05:16.934919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:05:16.936796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:16.937120 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:16.953597 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 09:05:16.965182 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:16.967230 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:16.967976 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:16.968217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:16.975723 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 09:05:16.991959 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:16.992865 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:17.003158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 09:05:17.005297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:17.005695 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:17.012689 systemd[1]: Finished ensure-sysext.service. 
Jan 16 09:05:17.018691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:05:17.019823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:05:17.040019 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 09:05:17.051106 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 09:05:17.068225 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 09:05:17.069565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:05:17.069859 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:05:17.072016 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 09:05:17.072112 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 09:05:17.093991 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:05:17.094295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 09:05:17.096281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 09:05:17.111765 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 09:05:17.112085 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 09:05:17.135800 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 09:05:17.139126 augenrules[1364]: No rules Jan 16 09:05:17.139520 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 09:05:17.141168 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Jan 16 09:05:17.152782 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 09:05:17.197033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:05:17.209671 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 09:05:17.212785 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 09:05:17.349676 systemd-resolved[1331]: Positive Trust Anchors: Jan 16 09:05:17.350214 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 09:05:17.350274 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 09:05:17.355899 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 09:05:17.358304 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 09:05:17.363719 systemd-resolved[1331]: Using system hostname 'ci-4081.3.0-a-d8418dcdb9'. Jan 16 09:05:17.367803 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 16 09:05:17.368646 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:05:17.406209 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 16 09:05:17.407293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:17.407715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:17.417636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:05:17.426755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 09:05:17.432760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:05:17.434659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:17.434726 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 09:05:17.434750 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:17.439246 systemd-networkd[1374]: lo: Link UP Jan 16 09:05:17.439261 systemd-networkd[1374]: lo: Gained carrier Jan 16 09:05:17.445052 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:05:17.445410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:05:17.457063 systemd-networkd[1374]: Enumeration completed Jan 16 09:05:17.458574 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 09:05:17.462216 systemd[1]: Reached target network.target - Network. Jan 16 09:05:17.476877 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 09:05:17.492398 kernel: ISO 9660 Extensions: RRIP_1991A Jan 16 09:05:17.497345 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 16 09:05:17.518696 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:05:17.519048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 09:05:17.521348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:05:17.523358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:05:17.524942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 09:05:17.525020 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 09:05:17.542023 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383) Jan 16 09:05:17.548948 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 16 09:05:17.628479 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 16 09:05:17.659483 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 16 09:05:17.676428 kernel: ACPI: button: Power Button [PWRF] Jan 16 09:05:17.698071 systemd-networkd[1374]: eth1: Configuring with /run/systemd/network/10-02:be:c0:f5:32:df.network. 
Jan 16 09:05:17.700739 systemd-networkd[1374]: eth1: Link UP Jan 16 09:05:17.701123 systemd-networkd[1374]: eth1: Gained carrier Jan 16 09:05:17.709189 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 16 09:05:17.717556 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 16 09:05:17.725400 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 09:05:17.733696 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 09:05:17.745972 systemd-networkd[1374]: eth0: Configuring with /run/systemd/network/10-82:7e:84:55:1e:26.network. Jan 16 09:05:17.750091 systemd-networkd[1374]: eth0: Link UP Jan 16 09:05:17.750261 systemd-networkd[1374]: eth0: Gained carrier Jan 16 09:05:17.750913 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 16 09:05:17.757738 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 16 09:05:17.780492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 09:05:17.803094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:05:17.844411 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 16 09:05:17.847402 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 09:05:17.854431 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 16 09:05:17.864501 kernel: Console: switching to colour dummy device 80x25 Jan 16 09:05:17.867691 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 09:05:17.867816 kernel: [drm] features: -context_init Jan 16 09:05:17.874595 kernel: [drm] number of scanouts: 1 Jan 16 09:05:17.874734 kernel: [drm] number of cap sets: 0 Jan 16 09:05:17.882405 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 16 09:05:17.891117 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 16 09:05:17.891234 kernel: Console: switching to colour frame buffer device 128x48 Jan 16 09:05:17.909553 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 09:05:17.913230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:05:17.915539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:17.936005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:05:17.964904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:05:17.966579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:18.038824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:05:18.128748 kernel: EDAC MC: Ver: 3.0.0 Jan 16 09:05:18.159282 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:18.187468 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 16 09:05:18.197744 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 16 09:05:18.221596 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 09:05:18.257694 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 16 09:05:18.261292 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:05:18.262640 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 09:05:18.262911 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 09:05:18.263032 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 09:05:18.263360 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 09:05:18.263749 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 09:05:18.263845 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 09:05:18.263918 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 09:05:18.263945 systemd[1]: Reached target paths.target - Path Units. Jan 16 09:05:18.263999 systemd[1]: Reached target timers.target - Timer Units. Jan 16 09:05:18.265835 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 09:05:18.269163 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 09:05:18.285991 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 09:05:18.291269 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 09:05:18.297291 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 09:05:18.298340 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 09:05:18.300122 systemd[1]: Reached target basic.target - Basic System. Jan 16 09:05:18.303842 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 09:05:18.303898 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 09:05:18.311730 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 09:05:18.317133 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 09:05:18.326778 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 09:05:18.343841 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 09:05:18.349563 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 09:05:18.362704 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 09:05:18.366626 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 09:05:18.383304 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 09:05:18.406367 jq[1444]: false Jan 16 09:05:18.409755 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 09:05:18.422179 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 09:05:18.435552 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
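The prepare-helm.service unit being started above is described as "Unpack helm to /opt/bin"; its actual unit file content is not shown in the log. A hypothetical sketch of what such an unpack step could amount to, using the tarball the files stage wrote earlier (archive path from the log; the member name follows the upstream helm tarball layout):

    import tarfile
    from pathlib import Path

    archive = Path("/opt/helm-v3.13.2-linux-amd64.tar.gz")  # written by the files stage
    dest = Path("/opt/bin")
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tar:
        member = tar.getmember("linux-amd64/helm")  # upstream tarball layout (assumed)
        member.name = "helm"                        # drop the leading directory
        tar.extract(member, path=dest)              # leaves /opt/bin/helm in place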
Jan 16 09:05:18.457104 extend-filesystems[1447]: Found loop4 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found loop5 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found loop6 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found loop7 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found vda Jan 16 09:05:18.457104 extend-filesystems[1447]: Found vda1 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found vda2 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found vda3 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found usr Jan 16 09:05:18.457104 extend-filesystems[1447]: Found vda4 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found vda6 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found vda7 Jan 16 09:05:18.457104 extend-filesystems[1447]: Found vda9 Jan 16 09:05:18.457104 extend-filesystems[1447]: Checking size of /dev/vda9 Jan 16 09:05:18.456506 dbus-daemon[1443]: [system] SELinux support is enabled Jan 16 09:05:18.450632 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 09:05:18.563330 coreos-metadata[1442]: Jan 16 09:05:18.503 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:05:18.563330 coreos-metadata[1442]: Jan 16 09:05:18.548 INFO Fetch successful Jan 16 09:05:18.563918 extend-filesystems[1447]: Resized partition /dev/vda9 Jan 16 09:05:18.451942 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 09:05:18.568492 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Jan 16 09:05:18.454763 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 09:05:18.461670 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 09:05:18.468741 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 09:05:18.571850 jq[1462]: true Jan 16 09:05:18.533729 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 09:05:18.545589 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 09:05:18.557642 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 09:05:18.557923 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 09:05:18.558541 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 09:05:18.558824 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 09:05:18.577588 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 16 09:05:18.578791 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 09:05:18.579055 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 09:05:18.613500 update_engine[1460]: I20250116 09:05:18.606634 1460 main.cc:92] Flatcar Update Engine starting Jan 16 09:05:18.633633 jq[1471]: true Jan 16 09:05:18.640233 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383) Jan 16 09:05:18.641767 update_engine[1460]: I20250116 09:05:18.637583 1460 update_check_scheduler.cc:74] Next update check in 2m37s Jan 16 09:05:18.656972 systemd[1]: Started update-engine.service - Update Engine. 
Jan 16 09:05:18.661166 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 09:05:18.661236 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 09:05:18.665275 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 09:05:18.666289 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 16 09:05:18.666346 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 09:05:18.681550 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 09:05:18.690422 tar[1470]: linux-amd64/helm Jan 16 09:05:18.699992 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 09:05:18.773268 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 09:05:18.779920 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 09:05:18.963213 bash[1505]: Updated "/home/core/.ssh/authorized_keys" Jan 16 09:05:18.966062 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 09:05:18.967733 systemd-logind[1456]: New seat seat0. Jan 16 09:05:18.977687 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button) Jan 16 09:05:18.977732 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 16 09:05:18.993660 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 16 09:05:18.988521 systemd[1]: Starting sshkeys.service... Jan 16 09:05:18.992114 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 09:05:19.079465 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 16 09:05:19.079465 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 16 09:05:19.079465 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 16 09:05:19.090152 extend-filesystems[1447]: Resized filesystem in /dev/vda9 Jan 16 09:05:19.090152 extend-filesystems[1447]: Found vdb Jan 16 09:05:19.081358 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 09:05:19.082055 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 09:05:19.091177 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 09:05:19.106976 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 09:05:19.192768 systemd-networkd[1374]: eth1: Gained IPv6LL Jan 16 09:05:19.193711 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 16 09:05:19.208352 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 09:05:19.215599 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 09:05:19.234819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 16 09:05:19.247056 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 09:05:19.278637 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 09:05:19.290545 coreos-metadata[1514]: Jan 16 09:05:19.290 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:05:19.293307 containerd[1482]: time="2025-01-16T09:05:19.293186931Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 16 09:05:19.309576 coreos-metadata[1514]: Jan 16 09:05:19.309 INFO Fetch successful Jan 16 09:05:19.353129 unknown[1514]: wrote ssh authorized keys file for user: core Jan 16 09:05:19.393472 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 09:05:19.421402 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys" Jan 16 09:05:19.424008 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 09:05:19.429159 systemd[1]: Finished sshkeys.service. Jan 16 09:05:19.478410 containerd[1482]: time="2025-01-16T09:05:19.476426755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:05:19.484757 containerd[1482]: time="2025-01-16T09:05:19.484694631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.486451289Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.486521880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.486772177Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.486806360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.486892464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.486911979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.487172195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.487193447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.487208989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.487221093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.487309086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488243 containerd[1482]: time="2025-01-16T09:05:19.487660645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488690 containerd[1482]: time="2025-01-16T09:05:19.487846828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:05:19.488690 containerd[1482]: time="2025-01-16T09:05:19.487868780Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 09:05:19.488690 containerd[1482]: time="2025-01-16T09:05:19.487997902Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 09:05:19.488690 containerd[1482]: time="2025-01-16T09:05:19.488059498Z" level=info msg="metadata content store policy set" policy=shared Jan 16 09:05:19.504659 containerd[1482]: time="2025-01-16T09:05:19.504592992Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 09:05:19.504925 containerd[1482]: time="2025-01-16T09:05:19.504898233Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 09:05:19.508027 containerd[1482]: time="2025-01-16T09:05:19.507425163Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 09:05:19.508027 containerd[1482]: time="2025-01-16T09:05:19.507500549Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 09:05:19.508027 containerd[1482]: time="2025-01-16T09:05:19.507528410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 16 09:05:19.508027 containerd[1482]: time="2025-01-16T09:05:19.507773377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 09:05:19.508276 containerd[1482]: time="2025-01-16T09:05:19.508232450Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 09:05:19.508570 containerd[1482]: time="2025-01-16T09:05:19.508540901Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 09:05:19.508649 containerd[1482]: time="2025-01-16T09:05:19.508576802Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 09:05:19.508649 containerd[1482]: time="2025-01-16T09:05:19.508598499Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 09:05:19.508649 containerd[1482]: time="2025-01-16T09:05:19.508622290Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 16 09:05:19.508649 containerd[1482]: time="2025-01-16T09:05:19.508644175Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 16 09:05:19.508772 containerd[1482]: time="2025-01-16T09:05:19.508666337Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 09:05:19.508772 containerd[1482]: time="2025-01-16T09:05:19.508691139Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 09:05:19.508772 containerd[1482]: time="2025-01-16T09:05:19.508734972Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 09:05:19.508772 containerd[1482]: time="2025-01-16T09:05:19.508760992Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 09:05:19.508899 containerd[1482]: time="2025-01-16T09:05:19.508781622Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 09:05:19.508899 containerd[1482]: time="2025-01-16T09:05:19.508800727Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 09:05:19.508899 containerd[1482]: time="2025-01-16T09:05:19.508835929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.508899 containerd[1482]: time="2025-01-16T09:05:19.508860281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.508899 containerd[1482]: time="2025-01-16T09:05:19.508879738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.508903567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.508924550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.508945387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.508966224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.508986234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.509008129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.509032179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.509049756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.509068450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.509089562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.509113830Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 09:05:19.509170 containerd[1482]: time="2025-01-16T09:05:19.509148188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509185120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509206329Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509291174Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509316933Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509329097Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509340733Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509350131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509388983Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509400894Z" level=info msg="NRI interface is disabled by configuration." Jan 16 09:05:19.509521 containerd[1482]: time="2025-01-16T09:05:19.509430173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 16 09:05:19.510432 containerd[1482]: time="2025-01-16T09:05:19.509785520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 09:05:19.510432 containerd[1482]: time="2025-01-16T09:05:19.509878325Z" level=info msg="Connect containerd service" Jan 16 09:05:19.510432 containerd[1482]: time="2025-01-16T09:05:19.509933502Z" level=info msg="using legacy CRI server" Jan 16 09:05:19.510432 containerd[1482]: time="2025-01-16T09:05:19.509942779Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 09:05:19.510432 containerd[1482]: time="2025-01-16T09:05:19.510129963Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 09:05:19.519402 containerd[1482]: time="2025-01-16T09:05:19.517875118Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 09:05:19.519402 
containerd[1482]: time="2025-01-16T09:05:19.518266189Z" level=info msg="Start subscribing containerd event" Jan 16 09:05:19.519402 containerd[1482]: time="2025-01-16T09:05:19.518349725Z" level=info msg="Start recovering state" Jan 16 09:05:19.519402 containerd[1482]: time="2025-01-16T09:05:19.518490187Z" level=info msg="Start event monitor" Jan 16 09:05:19.519402 containerd[1482]: time="2025-01-16T09:05:19.518522024Z" level=info msg="Start snapshots syncer" Jan 16 09:05:19.519402 containerd[1482]: time="2025-01-16T09:05:19.518540421Z" level=info msg="Start cni network conf syncer for default" Jan 16 09:05:19.519402 containerd[1482]: time="2025-01-16T09:05:19.518560321Z" level=info msg="Start streaming server" Jan 16 09:05:19.520580 containerd[1482]: time="2025-01-16T09:05:19.520463729Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 09:05:19.520580 containerd[1482]: time="2025-01-16T09:05:19.520546532Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 09:05:19.521812 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 09:05:19.525586 containerd[1482]: time="2025-01-16T09:05:19.525397052Z" level=info msg="containerd successfully booted in 0.235326s" Jan 16 09:05:19.575618 systemd-networkd[1374]: eth0: Gained IPv6LL Jan 16 09:05:19.576734 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 16 09:05:19.836953 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 09:05:19.879172 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 09:05:19.893342 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 09:05:19.922001 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 09:05:19.922294 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 09:05:19.934850 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 09:05:19.982352 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 09:05:19.995009 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 09:05:20.006656 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 09:05:20.008898 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 09:05:20.162535 tar[1470]: linux-amd64/LICENSE Jan 16 09:05:20.162535 tar[1470]: linux-amd64/README.md Jan 16 09:05:20.175308 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 09:05:21.118444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:05:21.126308 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 09:05:21.134475 systemd[1]: Startup finished in 1.590s (kernel) + 8.864s (initrd) + 8.542s (userspace) = 18.998s. Jan 16 09:05:21.141206 (kubelet)[1564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 09:05:21.689159 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 09:05:21.702824 systemd[1]: Started sshd@0-146.190.127.227:22-139.178.68.195:41284.service - OpenSSH per-connection server daemon (139.178.68.195:41284). 
Jan 16 09:05:21.865894 sshd[1575]: Accepted publickey for core from 139.178.68.195 port 41284 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:21.871895 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:21.899057 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 09:05:21.908449 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 09:05:21.925598 systemd-logind[1456]: New session 1 of user core. Jan 16 09:05:21.975400 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 09:05:21.994864 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 09:05:22.024037 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 09:05:22.250418 systemd[1580]: Queued start job for default target default.target. Jan 16 09:05:22.261999 systemd[1580]: Created slice app.slice - User Application Slice. Jan 16 09:05:22.262978 systemd[1580]: Reached target paths.target - Paths. Jan 16 09:05:22.263192 systemd[1580]: Reached target timers.target - Timers. Jan 16 09:05:22.269683 systemd[1580]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 09:05:22.298043 systemd[1580]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 09:05:22.299878 systemd[1580]: Reached target sockets.target - Sockets. Jan 16 09:05:22.299946 systemd[1580]: Reached target basic.target - Basic System. Jan 16 09:05:22.300031 systemd[1580]: Reached target default.target - Main User Target. Jan 16 09:05:22.300086 systemd[1580]: Startup finished in 253ms. Jan 16 09:05:22.301658 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 09:05:22.308042 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 09:05:22.408940 systemd[1]: Started sshd@1-146.190.127.227:22-139.178.68.195:41296.service - OpenSSH per-connection server daemon (139.178.68.195:41296). Jan 16 09:05:22.498655 sshd[1592]: Accepted publickey for core from 139.178.68.195 port 41296 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:22.504482 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:22.513609 kubelet[1564]: E0116 09:05:22.513453 1564 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 09:05:22.516395 systemd-logind[1456]: New session 2 of user core. Jan 16 09:05:22.517345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 09:05:22.517918 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 09:05:22.518355 systemd[1]: kubelet.service: Consumed 1.517s CPU time. Jan 16 09:05:22.524694 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 09:05:22.604152 sshd[1592]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:22.614164 systemd[1]: sshd@1-146.190.127.227:22-139.178.68.195:41296.service: Deactivated successfully. Jan 16 09:05:22.624284 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 09:05:22.633250 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. 
Jan 16 09:05:22.637181 systemd[1]: Started sshd@2-146.190.127.227:22-139.178.68.195:41304.service - OpenSSH per-connection server daemon (139.178.68.195:41304). Jan 16 09:05:22.640442 systemd-logind[1456]: Removed session 2. Jan 16 09:05:22.702952 sshd[1600]: Accepted publickey for core from 139.178.68.195 port 41304 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:22.705542 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:22.725854 systemd-logind[1456]: New session 3 of user core. Jan 16 09:05:22.732797 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 09:05:22.822739 sshd[1600]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:22.835801 systemd[1]: sshd@2-146.190.127.227:22-139.178.68.195:41304.service: Deactivated successfully. Jan 16 09:05:22.840688 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 09:05:22.850881 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Jan 16 09:05:22.860196 systemd[1]: Started sshd@3-146.190.127.227:22-139.178.68.195:41320.service - OpenSSH per-connection server daemon (139.178.68.195:41320). Jan 16 09:05:22.863017 systemd-logind[1456]: Removed session 3. Jan 16 09:05:22.926429 sshd[1607]: Accepted publickey for core from 139.178.68.195 port 41320 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:22.929433 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:22.944274 systemd-logind[1456]: New session 4 of user core. Jan 16 09:05:22.957549 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 09:05:23.032472 sshd[1607]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:23.044354 systemd[1]: sshd@3-146.190.127.227:22-139.178.68.195:41320.service: Deactivated successfully. Jan 16 09:05:23.049557 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 09:05:23.053340 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Jan 16 09:05:23.068895 systemd[1]: Started sshd@4-146.190.127.227:22-139.178.68.195:41326.service - OpenSSH per-connection server daemon (139.178.68.195:41326). Jan 16 09:05:23.072401 systemd-logind[1456]: Removed session 4. Jan 16 09:05:23.118477 sshd[1614]: Accepted publickey for core from 139.178.68.195 port 41326 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:23.120338 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:23.136198 systemd-logind[1456]: New session 5 of user core. Jan 16 09:05:23.142242 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 09:05:23.240210 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 09:05:23.243823 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:05:23.276400 sudo[1617]: pam_unix(sudo:session): session closed for user root Jan 16 09:05:23.281062 sshd[1614]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:23.296314 systemd[1]: sshd@4-146.190.127.227:22-139.178.68.195:41326.service: Deactivated successfully. Jan 16 09:05:23.302636 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 09:05:23.307642 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. 
Jan 16 09:05:23.314900 systemd[1]: Started sshd@5-146.190.127.227:22-139.178.68.195:41340.service - OpenSSH per-connection server daemon (139.178.68.195:41340). Jan 16 09:05:23.317653 systemd-logind[1456]: Removed session 5. Jan 16 09:05:23.368970 sshd[1622]: Accepted publickey for core from 139.178.68.195 port 41340 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:23.372504 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:23.379609 systemd-logind[1456]: New session 6 of user core. Jan 16 09:05:23.389739 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 09:05:23.458875 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 09:05:23.459865 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:05:23.465488 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 16 09:05:23.475884 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 16 09:05:23.476882 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:05:23.501007 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 16 09:05:23.506879 auditctl[1629]: No rules Jan 16 09:05:23.507987 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 09:05:23.508533 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 16 09:05:23.517100 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 09:05:23.579154 augenrules[1647]: No rules Jan 16 09:05:23.581642 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 09:05:23.584320 sudo[1625]: pam_unix(sudo:session): session closed for user root Jan 16 09:05:23.592020 sshd[1622]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:23.603214 systemd[1]: sshd@5-146.190.127.227:22-139.178.68.195:41340.service: Deactivated successfully. Jan 16 09:05:23.606194 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 09:05:23.609969 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Jan 16 09:05:23.622192 systemd[1]: Started sshd@6-146.190.127.227:22-139.178.68.195:39764.service - OpenSSH per-connection server daemon (139.178.68.195:39764). Jan 16 09:05:23.626600 systemd-logind[1456]: Removed session 6. Jan 16 09:05:23.679937 sshd[1655]: Accepted publickey for core from 139.178.68.195 port 39764 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:23.683052 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:23.692593 systemd-logind[1456]: New session 7 of user core. Jan 16 09:05:23.704743 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 09:05:23.778422 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 09:05:23.788195 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:05:24.470899 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 16 09:05:24.471167 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 09:05:25.238812 dockerd[1673]: time="2025-01-16T09:05:25.238688011Z" level=info msg="Starting up" Jan 16 09:05:25.481325 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2625496409-merged.mount: Deactivated successfully. Jan 16 09:05:25.534412 dockerd[1673]: time="2025-01-16T09:05:25.533877718Z" level=info msg="Loading containers: start." Jan 16 09:05:25.784411 kernel: Initializing XFRM netlink socket Jan 16 09:05:25.836709 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 16 09:05:25.846484 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 16 09:05:25.977803 systemd-networkd[1374]: docker0: Link UP Jan 16 09:05:25.978731 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 16 09:05:26.024441 dockerd[1673]: time="2025-01-16T09:05:26.024294591Z" level=info msg="Loading containers: done." Jan 16 09:05:26.060128 dockerd[1673]: time="2025-01-16T09:05:26.059652470Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 09:05:26.060128 dockerd[1673]: time="2025-01-16T09:05:26.059837528Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 16 09:05:26.060128 dockerd[1673]: time="2025-01-16T09:05:26.060029976Z" level=info msg="Daemon has completed initialization" Jan 16 09:05:26.171048 dockerd[1673]: time="2025-01-16T09:05:26.170630509Z" level=info msg="API listen on /run/docker.sock" Jan 16 09:05:26.172068 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 09:05:26.470778 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck580790518-merged.mount: Deactivated successfully. Jan 16 09:05:27.563588 containerd[1482]: time="2025-01-16T09:05:27.563050354Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 16 09:05:28.363213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825054151.mount: Deactivated successfully. 
Jan 16 09:05:30.695301 containerd[1482]: time="2025-01-16T09:05:30.693711486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:30.696154 containerd[1482]: time="2025-01-16T09:05:30.696088043Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 16 09:05:30.699046 containerd[1482]: time="2025-01-16T09:05:30.698981423Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:30.718824 containerd[1482]: time="2025-01-16T09:05:30.718747166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:30.720485 containerd[1482]: time="2025-01-16T09:05:30.720422083Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.157321296s" Jan 16 09:05:30.720485 containerd[1482]: time="2025-01-16T09:05:30.720481131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 16 09:05:30.759794 containerd[1482]: time="2025-01-16T09:05:30.759749473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 16 09:05:32.768897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 09:05:32.794122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:05:33.061775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:05:33.082148 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 09:05:33.218969 kubelet[1896]: E0116 09:05:33.218864 1896 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 09:05:33.234259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 09:05:33.234577 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 16 09:05:33.539790 containerd[1482]: time="2025-01-16T09:05:33.539669634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:33.543059 containerd[1482]: time="2025-01-16T09:05:33.542977048Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 16 09:05:33.545803 containerd[1482]: time="2025-01-16T09:05:33.545680055Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:33.553056 containerd[1482]: time="2025-01-16T09:05:33.552945238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:33.554993 containerd[1482]: time="2025-01-16T09:05:33.554786920Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.794780858s" Jan 16 09:05:33.554993 containerd[1482]: time="2025-01-16T09:05:33.554856366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 16 09:05:33.602132 containerd[1482]: time="2025-01-16T09:05:33.601655940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 16 09:05:33.777417 systemd-resolved[1331]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 16 09:05:35.483046 containerd[1482]: time="2025-01-16T09:05:35.482945760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:35.485477 containerd[1482]: time="2025-01-16T09:05:35.484929628Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 16 09:05:35.486666 containerd[1482]: time="2025-01-16T09:05:35.486569786Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:35.496424 containerd[1482]: time="2025-01-16T09:05:35.496269993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:35.498491 containerd[1482]: time="2025-01-16T09:05:35.498243700Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.896527733s" Jan 16 09:05:35.498491 containerd[1482]: time="2025-01-16T09:05:35.498315838Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 16 09:05:35.547253 containerd[1482]: time="2025-01-16T09:05:35.547202327Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 16 09:05:36.856579 systemd-resolved[1331]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 16 09:05:37.351143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2412789912.mount: Deactivated successfully. 
Jan 16 09:05:38.404308 containerd[1482]: time="2025-01-16T09:05:38.402770883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:38.408412 containerd[1482]: time="2025-01-16T09:05:38.407586669Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 16 09:05:38.421419 containerd[1482]: time="2025-01-16T09:05:38.420603210Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:38.431536 containerd[1482]: time="2025-01-16T09:05:38.430860765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:38.433021 containerd[1482]: time="2025-01-16T09:05:38.432954103Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.88570195s" Jan 16 09:05:38.433289 containerd[1482]: time="2025-01-16T09:05:38.433257895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 16 09:05:38.493420 containerd[1482]: time="2025-01-16T09:05:38.493343857Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 16 09:05:39.214008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573835225.mount: Deactivated successfully. 
Jan 16 09:05:40.869118 containerd[1482]: time="2025-01-16T09:05:40.869033029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:40.870518 containerd[1482]: time="2025-01-16T09:05:40.870163471Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 16 09:05:40.871566 containerd[1482]: time="2025-01-16T09:05:40.871349546Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:40.876807 containerd[1482]: time="2025-01-16T09:05:40.876727114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:40.880333 containerd[1482]: time="2025-01-16T09:05:40.878716569Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.385305908s" Jan 16 09:05:40.880333 containerd[1482]: time="2025-01-16T09:05:40.878788024Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 16 09:05:40.930229 containerd[1482]: time="2025-01-16T09:05:40.929862608Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 16 09:05:40.934831 systemd-resolved[1331]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 16 09:05:41.552145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1300410244.mount: Deactivated successfully. 
Jan 16 09:05:41.571953 containerd[1482]: time="2025-01-16T09:05:41.568348800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:41.574060 containerd[1482]: time="2025-01-16T09:05:41.573949822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 16 09:05:41.579585 containerd[1482]: time="2025-01-16T09:05:41.576648512Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:41.582306 containerd[1482]: time="2025-01-16T09:05:41.582228716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:41.586036 containerd[1482]: time="2025-01-16T09:05:41.585183030Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 654.631164ms" Jan 16 09:05:41.586036 containerd[1482]: time="2025-01-16T09:05:41.585267237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 16 09:05:41.640767 containerd[1482]: time="2025-01-16T09:05:41.640463491Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 16 09:05:42.393169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115428472.mount: Deactivated successfully. Jan 16 09:05:43.393542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 09:05:43.405116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:05:43.619802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:05:43.628476 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 09:05:43.750691 kubelet[2039]: E0116 09:05:43.749662 2039 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 09:05:43.756132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 09:05:43.757032 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 16 09:05:45.588912 containerd[1482]: time="2025-01-16T09:05:45.588816772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:45.592070 containerd[1482]: time="2025-01-16T09:05:45.591525390Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 16 09:05:45.593721 containerd[1482]: time="2025-01-16T09:05:45.593615377Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:45.598945 containerd[1482]: time="2025-01-16T09:05:45.598859008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:45.601722 containerd[1482]: time="2025-01-16T09:05:45.601309074Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.96079065s" Jan 16 09:05:45.601722 containerd[1482]: time="2025-01-16T09:05:45.601405132Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 16 09:05:50.211786 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:05:50.221752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:05:50.255787 systemd[1]: Reloading requested from client PID 2114 ('systemctl') (unit session-7.scope)... Jan 16 09:05:50.255811 systemd[1]: Reloading... Jan 16 09:05:50.471502 zram_generator::config[2156]: No configuration found. Jan 16 09:05:50.662929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:05:50.772283 systemd[1]: Reloading finished in 513 ms. Jan 16 09:05:50.869750 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 09:05:50.869912 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 09:05:50.870823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:05:50.881946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:05:51.086587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:05:51.108295 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 09:05:51.205719 kubelet[2207]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:05:51.206277 kubelet[2207]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 16 09:05:51.206344 kubelet[2207]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:05:51.208221 kubelet[2207]: I0116 09:05:51.208108 2207 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 09:05:51.908972 kubelet[2207]: I0116 09:05:51.908343 2207 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 16 09:05:51.908972 kubelet[2207]: I0116 09:05:51.908435 2207 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 09:05:51.908972 kubelet[2207]: I0116 09:05:51.908785 2207 server.go:927] "Client rotation is on, will bootstrap in background" Jan 16 09:05:51.938864 kubelet[2207]: I0116 09:05:51.938811 2207 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 09:05:51.941287 kubelet[2207]: E0116 09:05:51.940921 2207 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.127.227:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:51.975244 kubelet[2207]: I0116 09:05:51.975201 2207 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 09:05:51.976678 kubelet[2207]: I0116 09:05:51.975984 2207 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 09:05:51.976678 kubelet[2207]: I0116 09:05:51.976048 2207 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-d8418dcdb9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 09:05:51.976678 kubelet[2207]: I0116 09:05:51.976347 2207 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 16 09:05:51.976678 kubelet[2207]: I0116 09:05:51.976361 2207 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 09:05:51.977870 kubelet[2207]: I0116 09:05:51.977647 2207 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:05:51.978976 kubelet[2207]: I0116 09:05:51.978817 2207 kubelet.go:400] "Attempting to sync node with API server" Jan 16 09:05:51.978976 kubelet[2207]: I0116 09:05:51.978855 2207 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 09:05:51.978976 kubelet[2207]: I0116 09:05:51.978886 2207 kubelet.go:312] "Adding apiserver pod source" Jan 16 09:05:51.978976 kubelet[2207]: I0116 09:05:51.978917 2207 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 09:05:51.993929 kubelet[2207]: W0116 09:05:51.982757 2207 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.127.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-d8418dcdb9&limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:51.993929 kubelet[2207]: E0116 09:05:51.982886 2207 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.127.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-d8418dcdb9&limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:51.993929 kubelet[2207]: W0116 09:05:51.988705 2207 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.127.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:51.993929 kubelet[2207]: E0116 09:05:51.988778 2207 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.127.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:51.994937 kubelet[2207]: I0116 09:05:51.994907 2207 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 09:05:51.998086 kubelet[2207]: I0116 09:05:51.998038 2207 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 09:05:51.998402 kubelet[2207]: W0116 09:05:51.998387 2207 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 16 09:05:51.999397 kubelet[2207]: I0116 09:05:51.999351 2207 server.go:1264] "Started kubelet" Jan 16 09:05:52.011918 kubelet[2207]: I0116 09:05:52.011869 2207 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 09:05:52.023689 kubelet[2207]: E0116 09:05:52.022192 2207 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.127.227:6443/api/v1/namespaces/default/events\": dial tcp 146.190.127.227:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-d8418dcdb9.181b20fcef409e37 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-d8418dcdb9,UID:ci-4081.3.0-a-d8418dcdb9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-d8418dcdb9,},FirstTimestamp:2025-01-16 09:05:51.999295031 +0000 UTC m=+0.880582415,LastTimestamp:2025-01-16 09:05:51.999295031 +0000 UTC m=+0.880582415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-d8418dcdb9,}" Jan 16 09:05:52.029220 kubelet[2207]: I0116 09:05:52.029103 2207 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 09:05:52.030871 kubelet[2207]: I0116 09:05:52.030197 2207 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 16 09:05:52.030871 kubelet[2207]: I0116 09:05:52.030350 2207 reconciler.go:26] "Reconciler: start to sync state" Jan 16 09:05:52.031092 kubelet[2207]: I0116 09:05:52.030982 2207 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 09:05:52.035725 kubelet[2207]: I0116 09:05:52.034406 2207 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 09:05:52.035725 kubelet[2207]: I0116 09:05:52.034869 2207 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 09:05:52.040936 kubelet[2207]: I0116 09:05:52.040897 2207 server.go:455] "Adding debug handlers to kubelet server" Jan 16 09:05:52.041260 kubelet[2207]: E0116 09:05:52.041206 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.127.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-d8418dcdb9?timeout=10s\": dial tcp 146.190.127.227:6443: connect: connection refused" interval="200ms" Jan 16 09:05:52.041586 kubelet[2207]: W0116 09:05:52.041518 2207 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.127.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:52.041733 kubelet[2207]: E0116 09:05:52.041715 2207 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.127.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:52.043927 kubelet[2207]: I0116 09:05:52.043873 2207 factory.go:221] Registration of the systemd container factory successfully Jan 16 09:05:52.044698 kubelet[2207]: I0116 09:05:52.044649 2207 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file 
or directory Jan 16 09:05:52.045591 kubelet[2207]: E0116 09:05:52.045562 2207 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 09:05:52.047849 kubelet[2207]: I0116 09:05:52.047814 2207 factory.go:221] Registration of the containerd container factory successfully Jan 16 09:05:52.073491 kubelet[2207]: I0116 09:05:52.072705 2207 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 09:05:52.074485 kubelet[2207]: I0116 09:05:52.074450 2207 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 09:05:52.074485 kubelet[2207]: I0116 09:05:52.074474 2207 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 09:05:52.074689 kubelet[2207]: I0116 09:05:52.074500 2207 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:05:52.077613 kubelet[2207]: I0116 09:05:52.077522 2207 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 09:05:52.077613 kubelet[2207]: I0116 09:05:52.077580 2207 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 09:05:52.077613 kubelet[2207]: I0116 09:05:52.077616 2207 kubelet.go:2337] "Starting kubelet main sync loop" Jan 16 09:05:52.077923 kubelet[2207]: E0116 09:05:52.077691 2207 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 09:05:52.079041 kubelet[2207]: W0116 09:05:52.078776 2207 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.127.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:52.079041 kubelet[2207]: E0116 09:05:52.078869 2207 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.127.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:52.084677 kubelet[2207]: I0116 09:05:52.084638 2207 policy_none.go:49] "None policy: Start" Jan 16 09:05:52.085792 kubelet[2207]: I0116 09:05:52.085767 2207 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 09:05:52.086101 kubelet[2207]: I0116 09:05:52.086051 2207 state_mem.go:35] "Initializing new in-memory state store" Jan 16 09:05:52.113253 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 09:05:52.132542 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 09:05:52.134341 kubelet[2207]: I0116 09:05:52.134219 2207 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.135543 kubelet[2207]: E0116 09:05:52.135237 2207 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.127.227:6443/api/v1/nodes\": dial tcp 146.190.127.227:6443: connect: connection refused" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.141088 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 16 09:05:52.179583 kubelet[2207]: E0116 09:05:52.178592 2207 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 09:05:52.180786 kubelet[2207]: I0116 09:05:52.180717 2207 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 09:05:52.182159 kubelet[2207]: I0116 09:05:52.181058 2207 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 09:05:52.182159 kubelet[2207]: I0116 09:05:52.181284 2207 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 09:05:52.186889 kubelet[2207]: E0116 09:05:52.186843 2207 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-d8418dcdb9\" not found" Jan 16 09:05:52.242609 kubelet[2207]: E0116 09:05:52.242555 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.127.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-d8418dcdb9?timeout=10s\": dial tcp 146.190.127.227:6443: connect: connection refused" interval="400ms" Jan 16 09:05:52.339959 kubelet[2207]: I0116 09:05:52.339901 2207 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.340559 kubelet[2207]: E0116 09:05:52.340512 2207 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.127.227:6443/api/v1/nodes\": dial tcp 146.190.127.227:6443: connect: connection refused" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.379997 kubelet[2207]: I0116 09:05:52.379903 2207 topology_manager.go:215] "Topology Admit Handler" podUID="db00a56d55c5877daf1335933d80a3b1" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.382790 kubelet[2207]: I0116 09:05:52.381286 2207 topology_manager.go:215] "Topology Admit Handler" podUID="000fd9f127f718d3fa9ae3cd9ce989fd" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.383323 kubelet[2207]: I0116 09:05:52.383263 2207 topology_manager.go:215] "Topology Admit Handler" podUID="335ed276fd1c0191fe9a9d97a1adabf3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.394856 systemd[1]: Created slice kubepods-burstable-poddb00a56d55c5877daf1335933d80a3b1.slice - libcontainer container kubepods-burstable-poddb00a56d55c5877daf1335933d80a3b1.slice. Jan 16 09:05:52.428809 systemd[1]: Created slice kubepods-burstable-pod335ed276fd1c0191fe9a9d97a1adabf3.slice - libcontainer container kubepods-burstable-pod335ed276fd1c0191fe9a9d97a1adabf3.slice. 
Jan 16 09:05:52.433005 kubelet[2207]: I0116 09:05:52.432156 2207 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db00a56d55c5877daf1335933d80a3b1-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-d8418dcdb9\" (UID: \"db00a56d55c5877daf1335933d80a3b1\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.433005 kubelet[2207]: I0116 09:05:52.432213 2207 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: \"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.433005 kubelet[2207]: I0116 09:05:52.432250 2207 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: \"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.433005 kubelet[2207]: I0116 09:05:52.432276 2207 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: \"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.433005 kubelet[2207]: I0116 09:05:52.432299 2207 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/000fd9f127f718d3fa9ae3cd9ce989fd-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-d8418dcdb9\" (UID: \"000fd9f127f718d3fa9ae3cd9ce989fd\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.433308 kubelet[2207]: I0116 09:05:52.432323 2207 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/000fd9f127f718d3fa9ae3cd9ce989fd-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-d8418dcdb9\" (UID: \"000fd9f127f718d3fa9ae3cd9ce989fd\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.433308 kubelet[2207]: I0116 09:05:52.432348 2207 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/000fd9f127f718d3fa9ae3cd9ce989fd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-d8418dcdb9\" (UID: \"000fd9f127f718d3fa9ae3cd9ce989fd\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.433308 kubelet[2207]: I0116 09:05:52.432400 2207 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: \"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.433308 kubelet[2207]: I0116 09:05:52.432424 2207 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: \"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.449752 systemd[1]: Created slice kubepods-burstable-pod000fd9f127f718d3fa9ae3cd9ce989fd.slice - libcontainer container kubepods-burstable-pod000fd9f127f718d3fa9ae3cd9ce989fd.slice. Jan 16 09:05:52.644274 kubelet[2207]: E0116 09:05:52.644175 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.127.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-d8418dcdb9?timeout=10s\": dial tcp 146.190.127.227:6443: connect: connection refused" interval="800ms" Jan 16 09:05:52.722849 kubelet[2207]: E0116 09:05:52.722678 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:52.725001 containerd[1482]: time="2025-01-16T09:05:52.724000544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-d8418dcdb9,Uid:db00a56d55c5877daf1335933d80a3b1,Namespace:kube-system,Attempt:0,}" Jan 16 09:05:52.743520 kubelet[2207]: E0116 09:05:52.741952 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:52.743698 containerd[1482]: time="2025-01-16T09:05:52.742755408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-d8418dcdb9,Uid:335ed276fd1c0191fe9a9d97a1adabf3,Namespace:kube-system,Attempt:0,}" Jan 16 09:05:52.748422 kubelet[2207]: I0116 09:05:52.747804 2207 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.748422 kubelet[2207]: E0116 09:05:52.748261 2207 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.127.227:6443/api/v1/nodes\": dial tcp 146.190.127.227:6443: connect: connection refused" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:52.756170 kubelet[2207]: E0116 09:05:52.755721 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:52.757953 containerd[1482]: time="2025-01-16T09:05:52.757897891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-d8418dcdb9,Uid:000fd9f127f718d3fa9ae3cd9ce989fd,Namespace:kube-system,Attempt:0,}" Jan 16 09:05:52.884803 kubelet[2207]: W0116 09:05:52.884707 2207 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.127.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-d8418dcdb9&limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:52.885161 kubelet[2207]: E0116 09:05:52.885120 2207 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.127.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-d8418dcdb9&limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 
09:05:53.163017 kubelet[2207]: W0116 09:05:53.162922 2207 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.127.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:53.163017 kubelet[2207]: E0116 09:05:53.163019 2207 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.127.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:53.308047 kubelet[2207]: W0116 09:05:53.307874 2207 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.127.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:53.308047 kubelet[2207]: E0116 09:05:53.308008 2207 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.127.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:53.341731 kubelet[2207]: W0116 09:05:53.341591 2207 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.127.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:53.341731 kubelet[2207]: E0116 09:05:53.341693 2207 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.127.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:53.377193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627039799.mount: Deactivated successfully. 
Jan 16 09:05:53.384874 containerd[1482]: time="2025-01-16T09:05:53.384797126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:05:53.398612 containerd[1482]: time="2025-01-16T09:05:53.398346886Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 16 09:05:53.402185 containerd[1482]: time="2025-01-16T09:05:53.402108009Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:05:53.411174 containerd[1482]: time="2025-01-16T09:05:53.411073393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 09:05:53.416531 containerd[1482]: time="2025-01-16T09:05:53.416181370Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:05:53.427538 containerd[1482]: time="2025-01-16T09:05:53.427224539Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:05:53.428164 containerd[1482]: time="2025-01-16T09:05:53.428064356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 09:05:53.437478 containerd[1482]: time="2025-01-16T09:05:53.436574030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:05:53.437957 containerd[1482]: time="2025-01-16T09:05:53.437770608Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 679.774909ms" Jan 16 09:05:53.443491 containerd[1482]: time="2025-01-16T09:05:53.442314259Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 718.211897ms" Jan 16 09:05:53.445809 containerd[1482]: time="2025-01-16T09:05:53.445741460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.8689ms" Jan 16 09:05:53.446523 kubelet[2207]: E0116 09:05:53.446332 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.127.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-d8418dcdb9?timeout=10s\": dial tcp 146.190.127.227:6443: connect: connection refused" 
interval="1.6s" Jan 16 09:05:53.552449 kubelet[2207]: I0116 09:05:53.550611 2207 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:53.552449 kubelet[2207]: E0116 09:05:53.551070 2207 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.127.227:6443/api/v1/nodes\": dial tcp 146.190.127.227:6443: connect: connection refused" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:53.710931 containerd[1482]: time="2025-01-16T09:05:53.710513307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:05:53.711442 containerd[1482]: time="2025-01-16T09:05:53.710593095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:05:53.711442 containerd[1482]: time="2025-01-16T09:05:53.711176618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:53.711666 containerd[1482]: time="2025-01-16T09:05:53.711535970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:53.718755 containerd[1482]: time="2025-01-16T09:05:53.718603903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:05:53.719583 containerd[1482]: time="2025-01-16T09:05:53.719089940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:05:53.719583 containerd[1482]: time="2025-01-16T09:05:53.719125633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:53.719583 containerd[1482]: time="2025-01-16T09:05:53.719268361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:53.731434 containerd[1482]: time="2025-01-16T09:05:53.730852308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:05:53.733572 containerd[1482]: time="2025-01-16T09:05:53.732674900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:05:53.733572 containerd[1482]: time="2025-01-16T09:05:53.732725949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:53.733572 containerd[1482]: time="2025-01-16T09:05:53.732890308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:53.751723 systemd[1]: Started cri-containerd-79b3175e152ea37d9b5ca715115f32c3b52cad4437f3d9bfceb57ff0cacd018f.scope - libcontainer container 79b3175e152ea37d9b5ca715115f32c3b52cad4437f3d9bfceb57ff0cacd018f. Jan 16 09:05:53.778712 systemd[1]: Started cri-containerd-3cb857c07b8daa1edd5151680f917b4d43580d21a5573f7dd6bbb673319de81e.scope - libcontainer container 3cb857c07b8daa1edd5151680f917b4d43580d21a5573f7dd6bbb673319de81e. 
Jan 16 09:05:53.795900 systemd[1]: Started cri-containerd-029120667d318ade778c4641730df55dce8393f89901e0bc7b1f86250ce24946.scope - libcontainer container 029120667d318ade778c4641730df55dce8393f89901e0bc7b1f86250ce24946. Jan 16 09:05:53.900960 containerd[1482]: time="2025-01-16T09:05:53.900378764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-d8418dcdb9,Uid:000fd9f127f718d3fa9ae3cd9ce989fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"79b3175e152ea37d9b5ca715115f32c3b52cad4437f3d9bfceb57ff0cacd018f\"" Jan 16 09:05:53.903247 kubelet[2207]: E0116 09:05:53.903195 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:53.926751 containerd[1482]: time="2025-01-16T09:05:53.926337156Z" level=info msg="CreateContainer within sandbox \"79b3175e152ea37d9b5ca715115f32c3b52cad4437f3d9bfceb57ff0cacd018f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 09:05:53.928080 containerd[1482]: time="2025-01-16T09:05:53.927810442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-d8418dcdb9,Uid:db00a56d55c5877daf1335933d80a3b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cb857c07b8daa1edd5151680f917b4d43580d21a5573f7dd6bbb673319de81e\"" Jan 16 09:05:53.932860 kubelet[2207]: E0116 09:05:53.932520 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:53.937696 kubelet[2207]: E0116 09:05:53.935247 2207 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.127.227:6443/api/v1/namespaces/default/events\": dial tcp 146.190.127.227:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-d8418dcdb9.181b20fcef409e37 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-d8418dcdb9,UID:ci-4081.3.0-a-d8418dcdb9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-d8418dcdb9,},FirstTimestamp:2025-01-16 09:05:51.999295031 +0000 UTC m=+0.880582415,LastTimestamp:2025-01-16 09:05:51.999295031 +0000 UTC m=+0.880582415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-d8418dcdb9,}" Jan 16 09:05:53.940190 containerd[1482]: time="2025-01-16T09:05:53.939838465Z" level=info msg="CreateContainer within sandbox \"3cb857c07b8daa1edd5151680f917b4d43580d21a5573f7dd6bbb673319de81e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 09:05:53.941589 containerd[1482]: time="2025-01-16T09:05:53.941533480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-d8418dcdb9,Uid:335ed276fd1c0191fe9a9d97a1adabf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"029120667d318ade778c4641730df55dce8393f89901e0bc7b1f86250ce24946\"" Jan 16 09:05:53.943020 kubelet[2207]: E0116 09:05:53.942900 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:53.947946 containerd[1482]: 
time="2025-01-16T09:05:53.947886058Z" level=info msg="CreateContainer within sandbox \"029120667d318ade778c4641730df55dce8393f89901e0bc7b1f86250ce24946\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 09:05:54.015654 kubelet[2207]: E0116 09:05:54.014218 2207 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.127.227:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.127.227:6443: connect: connection refused Jan 16 09:05:54.041325 containerd[1482]: time="2025-01-16T09:05:54.041159287Z" level=info msg="CreateContainer within sandbox \"3cb857c07b8daa1edd5151680f917b4d43580d21a5573f7dd6bbb673319de81e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"191e5e9ef44ae03a4561947da3ca301175a85afd7c2be0d515026a0f5a99f138\"" Jan 16 09:05:54.042663 containerd[1482]: time="2025-01-16T09:05:54.042510021Z" level=info msg="StartContainer for \"191e5e9ef44ae03a4561947da3ca301175a85afd7c2be0d515026a0f5a99f138\"" Jan 16 09:05:54.056211 containerd[1482]: time="2025-01-16T09:05:54.055989809Z" level=info msg="CreateContainer within sandbox \"029120667d318ade778c4641730df55dce8393f89901e0bc7b1f86250ce24946\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a4b0636e20d6464cab8ae2ba15e4f43013205f2bc7ae45225a8d8f691977657\"" Jan 16 09:05:54.058299 containerd[1482]: time="2025-01-16T09:05:54.056893213Z" level=info msg="StartContainer for \"1a4b0636e20d6464cab8ae2ba15e4f43013205f2bc7ae45225a8d8f691977657\"" Jan 16 09:05:54.064710 containerd[1482]: time="2025-01-16T09:05:54.064644375Z" level=info msg="CreateContainer within sandbox \"79b3175e152ea37d9b5ca715115f32c3b52cad4437f3d9bfceb57ff0cacd018f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92eb3db77868304578a1223ba3302fcde2f49a14725f6d74fa389303fbe78feb\"" Jan 16 09:05:54.066550 containerd[1482]: time="2025-01-16T09:05:54.066503042Z" level=info msg="StartContainer for \"92eb3db77868304578a1223ba3302fcde2f49a14725f6d74fa389303fbe78feb\"" Jan 16 09:05:54.097749 systemd[1]: Started cri-containerd-191e5e9ef44ae03a4561947da3ca301175a85afd7c2be0d515026a0f5a99f138.scope - libcontainer container 191e5e9ef44ae03a4561947da3ca301175a85afd7c2be0d515026a0f5a99f138. Jan 16 09:05:54.133698 systemd[1]: Started cri-containerd-1a4b0636e20d6464cab8ae2ba15e4f43013205f2bc7ae45225a8d8f691977657.scope - libcontainer container 1a4b0636e20d6464cab8ae2ba15e4f43013205f2bc7ae45225a8d8f691977657. Jan 16 09:05:54.154680 systemd[1]: Started cri-containerd-92eb3db77868304578a1223ba3302fcde2f49a14725f6d74fa389303fbe78feb.scope - libcontainer container 92eb3db77868304578a1223ba3302fcde2f49a14725f6d74fa389303fbe78feb. 
Jan 16 09:05:54.213774 containerd[1482]: time="2025-01-16T09:05:54.213312635Z" level=info msg="StartContainer for \"191e5e9ef44ae03a4561947da3ca301175a85afd7c2be0d515026a0f5a99f138\" returns successfully" Jan 16 09:05:54.273343 containerd[1482]: time="2025-01-16T09:05:54.273178680Z" level=info msg="StartContainer for \"1a4b0636e20d6464cab8ae2ba15e4f43013205f2bc7ae45225a8d8f691977657\" returns successfully" Jan 16 09:05:54.292618 containerd[1482]: time="2025-01-16T09:05:54.292561100Z" level=info msg="StartContainer for \"92eb3db77868304578a1223ba3302fcde2f49a14725f6d74fa389303fbe78feb\" returns successfully" Jan 16 09:05:55.113717 kubelet[2207]: E0116 09:05:55.113629 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:55.118866 kubelet[2207]: E0116 09:05:55.118443 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:55.124010 kubelet[2207]: E0116 09:05:55.123938 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:55.156146 kubelet[2207]: I0116 09:05:55.155174 2207 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:56.129155 kubelet[2207]: E0116 09:05:56.129101 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:56.130267 kubelet[2207]: E0116 09:05:56.130225 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:56.131216 kubelet[2207]: E0116 09:05:56.131187 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:57.368588 systemd-timesyncd[1348]: Contacted time server 66.228.58.20:123 (2.flatcar.pool.ntp.org). Jan 16 09:05:57.368686 systemd-timesyncd[1348]: Initial clock synchronization to Thu 2025-01-16 09:05:57.368197 UTC. Jan 16 09:05:57.368804 systemd-resolved[1331]: Clock change detected. Flushing caches. 
Jan 16 09:05:58.121878 kubelet[2207]: E0116 09:05:58.121810 2207 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-d8418dcdb9\" not found" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:05:58.176888 kubelet[2207]: I0116 09:05:58.176836 2207 apiserver.go:52] "Watching apiserver" Jan 16 09:05:58.193665 kubelet[2207]: I0116 09:05:58.193604 2207 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 16 09:05:58.294960 kubelet[2207]: E0116 09:05:58.293622 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:58.299094 kubelet[2207]: I0116 09:05:58.298757 2207 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:01.760833 kubelet[2207]: W0116 09:06:01.759593 2207 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:06:01.764571 kubelet[2207]: E0116 09:06:01.763991 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:01.812904 systemd[1]: Reloading requested from client PID 2482 ('systemctl') (unit session-7.scope)... Jan 16 09:06:01.813425 systemd[1]: Reloading... Jan 16 09:06:02.043288 zram_generator::config[2527]: No configuration found. Jan 16 09:06:02.305355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:06:02.328732 kubelet[2207]: E0116 09:06:02.328639 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:02.517325 systemd[1]: Reloading finished in 702 ms. Jan 16 09:06:02.618711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:06:02.621053 kubelet[2207]: E0116 09:06:02.619348 2207 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.0-a-d8418dcdb9.181b20fcef409e37 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-d8418dcdb9,UID:ci-4081.3.0-a-d8418dcdb9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-d8418dcdb9,},FirstTimestamp:2025-01-16 09:05:51.999295031 +0000 UTC m=+0.880582415,LastTimestamp:2025-01-16 09:05:51.999295031 +0000 UTC m=+0.880582415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-d8418dcdb9,}" Jan 16 09:06:02.639043 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 09:06:02.639702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:06:02.639822 systemd[1]: kubelet.service: Consumed 1.414s CPU time, 112.7M memory peak, 0B memory swap peak. Jan 16 09:06:02.661461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 16 09:06:02.986685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:06:03.043315 (kubelet)[2571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 09:06:03.186834 kubelet[2571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:06:03.186834 kubelet[2571]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 09:06:03.186834 kubelet[2571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:06:03.186834 kubelet[2571]: I0116 09:06:03.185938 2571 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 09:06:03.218555 kubelet[2571]: I0116 09:06:03.218477 2571 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 16 09:06:03.218555 kubelet[2571]: I0116 09:06:03.218530 2571 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 09:06:03.219120 kubelet[2571]: I0116 09:06:03.219037 2571 server.go:927] "Client rotation is on, will bootstrap in background" Jan 16 09:06:03.221373 kubelet[2571]: I0116 09:06:03.221331 2571 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 09:06:03.246399 kubelet[2571]: I0116 09:06:03.246206 2571 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 09:06:03.268581 kubelet[2571]: I0116 09:06:03.268384 2571 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 09:06:03.270232 kubelet[2571]: I0116 09:06:03.269985 2571 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 09:06:03.272224 kubelet[2571]: I0116 09:06:03.270820 2571 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-d8418dcdb9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 09:06:03.272224 kubelet[2571]: I0116 09:06:03.271148 2571 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 09:06:03.272224 kubelet[2571]: I0116 09:06:03.271168 2571 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 09:06:03.272224 kubelet[2571]: I0116 09:06:03.271241 2571 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:06:03.275324 kubelet[2571]: I0116 09:06:03.274319 2571 kubelet.go:400] "Attempting to sync node with API server" Jan 16 09:06:03.275798 kubelet[2571]: I0116 09:06:03.275578 2571 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 09:06:03.275798 kubelet[2571]: I0116 09:06:03.275643 2571 kubelet.go:312] "Adding apiserver pod source" Jan 16 09:06:03.275798 kubelet[2571]: I0116 09:06:03.275672 2571 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 09:06:03.278485 kubelet[2571]: I0116 09:06:03.278441 2571 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 09:06:03.300039 kubelet[2571]: I0116 09:06:03.296999 2571 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 09:06:03.304158 kubelet[2571]: I0116 09:06:03.303115 2571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 09:06:03.318254 kubelet[2571]: I0116 09:06:03.318210 2571 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 09:06:03.349182 kubelet[2571]: I0116 09:06:03.348245 2571 server.go:1264] "Started kubelet" 
Jan 16 09:06:03.386485 kubelet[2571]: I0116 09:06:03.381059 2571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 09:06:03.398220 kubelet[2571]: I0116 09:06:03.381115 2571 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 09:06:03.398220 kubelet[2571]: I0116 09:06:03.398177 2571 server.go:455] "Adding debug handlers to kubelet server" Jan 16 09:06:03.412185 kubelet[2571]: I0116 09:06:03.412027 2571 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 09:06:03.421128 kubelet[2571]: I0116 09:06:03.414391 2571 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 16 09:06:03.422364 kubelet[2571]: I0116 09:06:03.421659 2571 reconciler.go:26] "Reconciler: start to sync state" Jan 16 09:06:03.429697 kubelet[2571]: I0116 09:06:03.429595 2571 factory.go:221] Registration of the systemd container factory successfully Jan 16 09:06:03.430636 kubelet[2571]: I0116 09:06:03.430567 2571 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 09:06:03.444886 kubelet[2571]: I0116 09:06:03.444493 2571 factory.go:221] Registration of the containerd container factory successfully Jan 16 09:06:03.459394 kubelet[2571]: I0116 09:06:03.459305 2571 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 09:06:03.461904 kubelet[2571]: I0116 09:06:03.461855 2571 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 09:06:03.462691 kubelet[2571]: I0116 09:06:03.462175 2571 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 09:06:03.462691 kubelet[2571]: I0116 09:06:03.462214 2571 kubelet.go:2337] "Starting kubelet main sync loop" Jan 16 09:06:03.462691 kubelet[2571]: E0116 09:06:03.462303 2571 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 09:06:03.522146 kubelet[2571]: I0116 09:06:03.521996 2571 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.563271 kubelet[2571]: E0116 09:06:03.563216 2571 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 09:06:03.566556 kubelet[2571]: I0116 09:06:03.566311 2571 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.567866 kubelet[2571]: I0116 09:06:03.567664 2571 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.643128 kubelet[2571]: I0116 09:06:03.641069 2571 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 09:06:03.643128 kubelet[2571]: I0116 09:06:03.641105 2571 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 09:06:03.643128 kubelet[2571]: I0116 09:06:03.641144 2571 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:06:03.643128 kubelet[2571]: I0116 09:06:03.641624 2571 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 09:06:03.643128 kubelet[2571]: I0116 09:06:03.641645 2571 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 09:06:03.643128 kubelet[2571]: I0116 09:06:03.641676 2571 policy_none.go:49] "None policy: Start" Jan 16 09:06:03.645075 kubelet[2571]: I0116 09:06:03.645028 2571 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 
09:06:03.645363 kubelet[2571]: I0116 09:06:03.645347 2571 state_mem.go:35] "Initializing new in-memory state store" Jan 16 09:06:03.646152 kubelet[2571]: I0116 09:06:03.646109 2571 state_mem.go:75] "Updated machine memory state" Jan 16 09:06:03.656640 kubelet[2571]: I0116 09:06:03.656602 2571 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 09:06:03.657188 kubelet[2571]: I0116 09:06:03.657118 2571 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 09:06:03.658418 kubelet[2571]: I0116 09:06:03.658385 2571 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 09:06:03.770566 kubelet[2571]: I0116 09:06:03.764395 2571 topology_manager.go:215] "Topology Admit Handler" podUID="335ed276fd1c0191fe9a9d97a1adabf3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.770566 kubelet[2571]: I0116 09:06:03.764539 2571 topology_manager.go:215] "Topology Admit Handler" podUID="db00a56d55c5877daf1335933d80a3b1" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.770566 kubelet[2571]: I0116 09:06:03.764604 2571 topology_manager.go:215] "Topology Admit Handler" podUID="000fd9f127f718d3fa9ae3cd9ce989fd" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.800195 kubelet[2571]: W0116 09:06:03.799552 2571 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:06:03.800195 kubelet[2571]: W0116 09:06:03.799622 2571 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:06:03.802596 kubelet[2571]: W0116 09:06:03.801238 2571 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:06:03.804110 kubelet[2571]: E0116 09:06:03.803810 2571 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.845731 kubelet[2571]: I0116 09:06:03.842484 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: \"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.845731 kubelet[2571]: I0116 09:06:03.842536 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: \"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.845731 kubelet[2571]: I0116 09:06:03.842572 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: 
\"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.845731 kubelet[2571]: I0116 09:06:03.842608 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: \"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.845731 kubelet[2571]: I0116 09:06:03.842636 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db00a56d55c5877daf1335933d80a3b1-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-d8418dcdb9\" (UID: \"db00a56d55c5877daf1335933d80a3b1\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.846162 kubelet[2571]: I0116 09:06:03.842660 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/000fd9f127f718d3fa9ae3cd9ce989fd-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-d8418dcdb9\" (UID: \"000fd9f127f718d3fa9ae3cd9ce989fd\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.846162 kubelet[2571]: I0116 09:06:03.842689 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/335ed276fd1c0191fe9a9d97a1adabf3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" (UID: \"335ed276fd1c0191fe9a9d97a1adabf3\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.846162 kubelet[2571]: I0116 09:06:03.842711 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/000fd9f127f718d3fa9ae3cd9ce989fd-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-d8418dcdb9\" (UID: \"000fd9f127f718d3fa9ae3cd9ce989fd\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:03.846162 kubelet[2571]: I0116 09:06:03.842736 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/000fd9f127f718d3fa9ae3cd9ce989fd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-d8418dcdb9\" (UID: \"000fd9f127f718d3fa9ae3cd9ce989fd\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:04.103107 kubelet[2571]: E0116 09:06:04.102234 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:04.103107 kubelet[2571]: E0116 09:06:04.102620 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:04.106426 kubelet[2571]: E0116 09:06:04.106383 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:04.301071 kubelet[2571]: I0116 09:06:04.300927 2571 apiserver.go:52] "Watching apiserver" Jan 16 09:06:04.322355 
kubelet[2571]: I0116 09:06:04.322292 2571 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 16 09:06:04.564506 kubelet[2571]: E0116 09:06:04.561139 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:04.564506 kubelet[2571]: E0116 09:06:04.564252 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:04.578923 kubelet[2571]: W0116 09:06:04.578860 2571 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:06:04.579152 kubelet[2571]: E0116 09:06:04.578973 2571 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-d8418dcdb9\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:04.579578 kubelet[2571]: E0116 09:06:04.579546 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:04.615751 kubelet[2571]: I0116 09:06:04.615653 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-d8418dcdb9" podStartSLOduration=1.6155952660000001 podStartE2EDuration="1.615595266s" podCreationTimestamp="2025-01-16 09:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:04.610539757 +0000 UTC m=+1.546943590" watchObservedRunningTime="2025-01-16 09:06:04.615595266 +0000 UTC m=+1.551999103" Jan 16 09:06:04.636486 kubelet[2571]: I0116 09:06:04.636094 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-d8418dcdb9" podStartSLOduration=3.6360700660000003 podStartE2EDuration="3.636070066s" podCreationTimestamp="2025-01-16 09:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:04.635719683 +0000 UTC m=+1.572123520" watchObservedRunningTime="2025-01-16 09:06:04.636070066 +0000 UTC m=+1.572473914" Jan 16 09:06:04.656293 kubelet[2571]: I0116 09:06:04.656183 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-d8418dcdb9" podStartSLOduration=1.656157423 podStartE2EDuration="1.656157423s" podCreationTimestamp="2025-01-16 09:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:04.655320315 +0000 UTC m=+1.591724151" watchObservedRunningTime="2025-01-16 09:06:04.656157423 +0000 UTC m=+1.592561259" Jan 16 09:06:05.301491 update_engine[1460]: I20250116 09:06:05.301349 1460 update_attempter.cc:509] Updating boot flags... 
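The repeated dns.go:153 warnings above come from the kubelet truncating the host's resolver list: only the first three nameservers are honoured, and the line it actually applied is logged. A minimal Go sketch of that truncation, using a hypothetical resolv.conf (the droplet's real file is not shown in this log); it is not the kubelet's actual dns.go code:

```go
// Sketch of the "Nameserver limits exceeded" truncation (illustrative only).
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolver limit the kubelet enforces

func applyLimit(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		f := strings.Fields(line)
		if len(f) >= 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Println("Nameserver limits exceeded, applied:",
			strings.Join(servers[:maxNameservers], " "))
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf with four entries; the duplicate mirrors
	// the applied line seen in the log.
	conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 1.1.1.1\n"
	fmt.Println(applyLimit(conf))
}
```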
Jan 16 09:06:05.387945 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2620) Jan 16 09:06:05.551996 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2619) Jan 16 09:06:05.575078 kubelet[2571]: E0116 09:06:05.573356 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:05.581046 kubelet[2571]: E0116 09:06:05.578281 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:05.713875 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2619) Jan 16 09:06:07.773464 kubelet[2571]: E0116 09:06:07.773412 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:08.584866 kubelet[2571]: E0116 09:06:08.583685 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:09.586549 kubelet[2571]: E0116 09:06:09.586511 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:09.605181 kubelet[2571]: E0116 09:06:09.605111 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:10.588859 kubelet[2571]: E0116 09:06:10.588594 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:11.267766 sudo[1658]: pam_unix(sudo:session): session closed for user root Jan 16 09:06:11.281063 sshd[1655]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:11.288073 systemd[1]: sshd@6-146.190.127.227:22-139.178.68.195:39764.service: Deactivated successfully. Jan 16 09:06:11.293217 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 09:06:11.294147 systemd[1]: session-7.scope: Consumed 7.525s CPU time, 191.0M memory peak, 0B memory swap peak. Jan 16 09:06:11.295718 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Jan 16 09:06:11.299275 systemd-logind[1456]: Removed session 7. Jan 16 09:06:14.197019 kubelet[2571]: E0116 09:06:14.196958 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:14.949178 kubelet[2571]: I0116 09:06:14.949105 2571 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 09:06:14.952170 containerd[1482]: time="2025-01-16T09:06:14.951132162Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
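The kuberuntime_manager entry above hands the container runtime this node's pod CIDR, 192.168.0.0/24; every pod IP assigned here must fall inside that prefix. A small standard-library sketch of that containment check (the sample addresses are made up for illustration):

```go
// Check whether candidate pod IPs fall inside the node's pod CIDR.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	podCIDR := netip.MustParsePrefix("192.168.0.0/24") // from the log line above
	for _, s := range []string{"192.168.0.17", "192.168.1.5"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", s, podCIDR, podCIDR.Contains(ip))
	}
}
```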
Jan 16 09:06:14.952759 kubelet[2571]: I0116 09:06:14.951490 2571 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 09:06:15.422193 kubelet[2571]: I0116 09:06:15.419336 2571 topology_manager.go:215] "Topology Admit Handler" podUID="27d54352-399f-4b76-9584-50f313374d2f" podNamespace="kube-system" podName="kube-proxy-242kx" Jan 16 09:06:15.435421 systemd[1]: Created slice kubepods-besteffort-pod27d54352_399f_4b76_9584_50f313374d2f.slice - libcontainer container kubepods-besteffort-pod27d54352_399f_4b76_9584_50f313374d2f.slice. Jan 16 09:06:15.474591 kubelet[2571]: I0116 09:06:15.474149 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27d54352-399f-4b76-9584-50f313374d2f-kube-proxy\") pod \"kube-proxy-242kx\" (UID: \"27d54352-399f-4b76-9584-50f313374d2f\") " pod="kube-system/kube-proxy-242kx" Jan 16 09:06:15.474591 kubelet[2571]: I0116 09:06:15.474228 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27d54352-399f-4b76-9584-50f313374d2f-lib-modules\") pod \"kube-proxy-242kx\" (UID: \"27d54352-399f-4b76-9584-50f313374d2f\") " pod="kube-system/kube-proxy-242kx" Jan 16 09:06:15.474591 kubelet[2571]: I0116 09:06:15.474259 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7xzj\" (UniqueName: \"kubernetes.io/projected/27d54352-399f-4b76-9584-50f313374d2f-kube-api-access-q7xzj\") pod \"kube-proxy-242kx\" (UID: \"27d54352-399f-4b76-9584-50f313374d2f\") " pod="kube-system/kube-proxy-242kx" Jan 16 09:06:15.476903 kubelet[2571]: I0116 09:06:15.474705 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27d54352-399f-4b76-9584-50f313374d2f-xtables-lock\") pod \"kube-proxy-242kx\" (UID: \"27d54352-399f-4b76-9584-50f313374d2f\") " pod="kube-system/kube-proxy-242kx" Jan 16 09:06:15.751073 kubelet[2571]: E0116 09:06:15.750178 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:15.755342 containerd[1482]: time="2025-01-16T09:06:15.754575208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-242kx,Uid:27d54352-399f-4b76-9584-50f313374d2f,Namespace:kube-system,Attempt:0,}" Jan 16 09:06:15.833983 containerd[1482]: time="2025-01-16T09:06:15.833407789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:15.834358 containerd[1482]: time="2025-01-16T09:06:15.834047876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:15.834358 containerd[1482]: time="2025-01-16T09:06:15.834251463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:15.835366 containerd[1482]: time="2025-01-16T09:06:15.835068929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:15.901666 systemd[1]: Started cri-containerd-49d706b11f9e53a8173e0cfb46117f162955a1e7f5894754cd82829902214288.scope - libcontainer container 49d706b11f9e53a8173e0cfb46117f162955a1e7f5894754cd82829902214288. Jan 16 09:06:15.981261 containerd[1482]: time="2025-01-16T09:06:15.981179266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-242kx,Uid:27d54352-399f-4b76-9584-50f313374d2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"49d706b11f9e53a8173e0cfb46117f162955a1e7f5894754cd82829902214288\"" Jan 16 09:06:15.984911 kubelet[2571]: E0116 09:06:15.984436 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:15.990444 containerd[1482]: time="2025-01-16T09:06:15.990389687Z" level=info msg="CreateContainer within sandbox \"49d706b11f9e53a8173e0cfb46117f162955a1e7f5894754cd82829902214288\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 09:06:16.189535 kubelet[2571]: I0116 09:06:16.188558 2571 topology_manager.go:215] "Topology Admit Handler" podUID="52328975-c152-4078-baee-969f4b54af60" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-4hc8n" Jan 16 09:06:16.205614 systemd[1]: Created slice kubepods-besteffort-pod52328975_c152_4078_baee_969f4b54af60.slice - libcontainer container kubepods-besteffort-pod52328975_c152_4078_baee_969f4b54af60.slice. Jan 16 09:06:16.213113 containerd[1482]: time="2025-01-16T09:06:16.213038231Z" level=info msg="CreateContainer within sandbox \"49d706b11f9e53a8173e0cfb46117f162955a1e7f5894754cd82829902214288\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79ab1891dd977aa25a8a2d813a9ec2590b2532118d3f4b676a5edafe86347eb7\"" Jan 16 09:06:16.215636 containerd[1482]: time="2025-01-16T09:06:16.215588449Z" level=info msg="StartContainer for \"79ab1891dd977aa25a8a2d813a9ec2590b2532118d3f4b676a5edafe86347eb7\"" Jan 16 09:06:16.289168 kubelet[2571]: I0116 09:06:16.287581 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52328975-c152-4078-baee-969f4b54af60-var-lib-calico\") pod \"tigera-operator-7bc55997bb-4hc8n\" (UID: \"52328975-c152-4078-baee-969f4b54af60\") " pod="tigera-operator/tigera-operator-7bc55997bb-4hc8n" Jan 16 09:06:16.289168 kubelet[2571]: I0116 09:06:16.287721 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67hcp\" (UniqueName: \"kubernetes.io/projected/52328975-c152-4078-baee-969f4b54af60-kube-api-access-67hcp\") pod \"tigera-operator-7bc55997bb-4hc8n\" (UID: \"52328975-c152-4078-baee-969f4b54af60\") " pod="tigera-operator/tigera-operator-7bc55997bb-4hc8n" Jan 16 09:06:16.294390 systemd[1]: Started cri-containerd-79ab1891dd977aa25a8a2d813a9ec2590b2532118d3f4b676a5edafe86347eb7.scope - libcontainer container 79ab1891dd977aa25a8a2d813a9ec2590b2532118d3f4b676a5edafe86347eb7. 
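The kubepods-besteffort-pod….slice units created above are derived from each pod's QoS class and UID, with dashes escaped to underscores for systemd. A short sketch that reproduces the two slice names seen in this log from their pod UIDs (illustrative only, not the kubelet's cgroup-manager code):

```go
// Rebuild the besteffort slice names observed in the log from pod UIDs.
package main

import (
	"fmt"
	"strings"
)

func besteffortSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UIDs taken from the kube-proxy-242kx and tigera-operator entries above.
	fmt.Println(besteffortSlice("27d54352-399f-4b76-9584-50f313374d2f"))
	fmt.Println(besteffortSlice("52328975-c152-4078-baee-969f4b54af60"))
}
```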
Jan 16 09:06:16.378168 containerd[1482]: time="2025-01-16T09:06:16.378043612Z" level=info msg="StartContainer for \"79ab1891dd977aa25a8a2d813a9ec2590b2532118d3f4b676a5edafe86347eb7\" returns successfully" Jan 16 09:06:16.519668 containerd[1482]: time="2025-01-16T09:06:16.519475294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-4hc8n,Uid:52328975-c152-4078-baee-969f4b54af60,Namespace:tigera-operator,Attempt:0,}" Jan 16 09:06:16.590399 containerd[1482]: time="2025-01-16T09:06:16.589991007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:16.590399 containerd[1482]: time="2025-01-16T09:06:16.590271894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:16.592172 containerd[1482]: time="2025-01-16T09:06:16.590911094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:16.592172 containerd[1482]: time="2025-01-16T09:06:16.591106408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:16.618768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990344622.mount: Deactivated successfully. Jan 16 09:06:16.636009 kubelet[2571]: E0116 09:06:16.634178 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:16.662283 systemd[1]: Started cri-containerd-40a5fa2597a341b50c7987a3881f8085969378b8642f1a9d7b26acae90f9f51f.scope - libcontainer container 40a5fa2597a341b50c7987a3881f8085969378b8642f1a9d7b26acae90f9f51f. Jan 16 09:06:16.769392 containerd[1482]: time="2025-01-16T09:06:16.769315879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-4hc8n,Uid:52328975-c152-4078-baee-969f4b54af60,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"40a5fa2597a341b50c7987a3881f8085969378b8642f1a9d7b26acae90f9f51f\"" Jan 16 09:06:16.775045 containerd[1482]: time="2025-01-16T09:06:16.774541745Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 16 09:06:19.222084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690545418.mount: Deactivated successfully. 
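The PullImage request above uses a tag reference, while the pull result a few lines later also reports the digest form of the same image. A rough sketch of splitting both notations with plain string handling; real clients use a proper reference-parsing library, and this naive split would not handle registries with ports:

```go
// Naive split of the image references seen in the log into repo/tag and repo/digest.
package main

import (
	"fmt"
	"strings"
)

func main() {
	byTag := "quay.io/tigera/operator:v1.36.2"
	byDigest := "quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764"

	repo, tag, _ := strings.Cut(byTag, ":")
	fmt.Printf("repo=%s tag=%s\n", repo, tag)

	repo, digest, _ := strings.Cut(byDigest, "@")
	fmt.Printf("repo=%s digest=%s\n", repo, digest)
}
```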
Jan 16 09:06:20.130865 containerd[1482]: time="2025-01-16T09:06:20.129272815Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:20.131688 containerd[1482]: time="2025-01-16T09:06:20.131520260Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764281" Jan 16 09:06:20.132397 containerd[1482]: time="2025-01-16T09:06:20.132344861Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:20.138403 containerd[1482]: time="2025-01-16T09:06:20.138332138Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:20.142320 containerd[1482]: time="2025-01-16T09:06:20.142254025Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.36765164s" Jan 16 09:06:20.142768 containerd[1482]: time="2025-01-16T09:06:20.142642811Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 16 09:06:20.157464 containerd[1482]: time="2025-01-16T09:06:20.157413251Z" level=info msg="CreateContainer within sandbox \"40a5fa2597a341b50c7987a3881f8085969378b8642f1a9d7b26acae90f9f51f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 09:06:20.243639 containerd[1482]: time="2025-01-16T09:06:20.243471921Z" level=info msg="CreateContainer within sandbox \"40a5fa2597a341b50c7987a3881f8085969378b8642f1a9d7b26acae90f9f51f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6e417fff602678986c18cabff40d7b4afbea50ea051902a4e99308dabc3c323c\"" Jan 16 09:06:20.246979 containerd[1482]: time="2025-01-16T09:06:20.246602145Z" level=info msg="StartContainer for \"6e417fff602678986c18cabff40d7b4afbea50ea051902a4e99308dabc3c323c\"" Jan 16 09:06:20.311076 systemd[1]: Started cri-containerd-6e417fff602678986c18cabff40d7b4afbea50ea051902a4e99308dabc3c323c.scope - libcontainer container 6e417fff602678986c18cabff40d7b4afbea50ea051902a4e99308dabc3c323c. 
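From the "bytes read" count and the reported pull duration above, the effective transfer rate for the operator image can be estimated; note this is the compressed transfer size, not the unpacked image size:

```go
// Back-of-the-envelope pull throughput from the figures in the log.
package main

import "fmt"

func main() {
	const bytesRead = 21764281 // "bytes read" from the log
	const seconds = 3.36765164 // from "in 3.36765164s"
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %.2f s ≈ %.1f MiB/s\n", mib, seconds, mib/seconds)
}
```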
Jan 16 09:06:20.371201 containerd[1482]: time="2025-01-16T09:06:20.370255027Z" level=info msg="StartContainer for \"6e417fff602678986c18cabff40d7b4afbea50ea051902a4e99308dabc3c323c\" returns successfully" Jan 16 09:06:20.671681 kubelet[2571]: I0116 09:06:20.671544 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-242kx" podStartSLOduration=5.669118541 podStartE2EDuration="5.669118541s" podCreationTimestamp="2025-01-16 09:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:16.674903477 +0000 UTC m=+13.611307315" watchObservedRunningTime="2025-01-16 09:06:20.669118541 +0000 UTC m=+17.605522378" Jan 16 09:06:24.180955 kubelet[2571]: I0116 09:06:24.180864 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-4hc8n" podStartSLOduration=5.808200279 podStartE2EDuration="9.18083333s" podCreationTimestamp="2025-01-16 09:06:15 +0000 UTC" firstStartedPulling="2025-01-16 09:06:16.771876732 +0000 UTC m=+13.708280546" lastFinishedPulling="2025-01-16 09:06:20.144509767 +0000 UTC m=+17.080913597" observedRunningTime="2025-01-16 09:06:20.671753759 +0000 UTC m=+17.608157593" watchObservedRunningTime="2025-01-16 09:06:24.18083333 +0000 UTC m=+21.117237171" Jan 16 09:06:24.181597 kubelet[2571]: I0116 09:06:24.181324 2571 topology_manager.go:215] "Topology Admit Handler" podUID="e8add061-66ff-4cb6-b3e6-324eba0461f9" podNamespace="calico-system" podName="calico-typha-d97bbc577-z7bfw" Jan 16 09:06:24.203655 systemd[1]: Created slice kubepods-besteffort-pode8add061_66ff_4cb6_b3e6_324eba0461f9.slice - libcontainer container kubepods-besteffort-pode8add061_66ff_4cb6_b3e6_324eba0461f9.slice. Jan 16 09:06:24.268688 kubelet[2571]: I0116 09:06:24.268559 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8add061-66ff-4cb6-b3e6-324eba0461f9-tigera-ca-bundle\") pod \"calico-typha-d97bbc577-z7bfw\" (UID: \"e8add061-66ff-4cb6-b3e6-324eba0461f9\") " pod="calico-system/calico-typha-d97bbc577-z7bfw" Jan 16 09:06:24.268688 kubelet[2571]: I0116 09:06:24.268606 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e8add061-66ff-4cb6-b3e6-324eba0461f9-typha-certs\") pod \"calico-typha-d97bbc577-z7bfw\" (UID: \"e8add061-66ff-4cb6-b3e6-324eba0461f9\") " pod="calico-system/calico-typha-d97bbc577-z7bfw" Jan 16 09:06:24.268688 kubelet[2571]: I0116 09:06:24.268631 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvlpk\" (UniqueName: \"kubernetes.io/projected/e8add061-66ff-4cb6-b3e6-324eba0461f9-kube-api-access-zvlpk\") pod \"calico-typha-d97bbc577-z7bfw\" (UID: \"e8add061-66ff-4cb6-b3e6-324eba0461f9\") " pod="calico-system/calico-typha-d97bbc577-z7bfw" Jan 16 09:06:24.424835 kubelet[2571]: I0116 09:06:24.420348 2571 topology_manager.go:215] "Topology Admit Handler" podUID="4739ced5-4f52-4587-93bd-e23be6f62634" podNamespace="calico-system" podName="calico-node-cvrz2" Jan 16 09:06:24.445439 systemd[1]: Created slice kubepods-besteffort-pod4739ced5_4f52_4587_93bd_e23be6f62634.slice - libcontainer container kubepods-besteffort-pod4739ced5_4f52_4587_93bd_e23be6f62634.slice. 
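For tigera-operator, the startup-latency entry above reports an SLO duration that excludes image-pull time: the end-to-end start duration minus the pull window. A sketch that recomputes it from the logged timestamps; the tiny residual versus the logged 5.808200279s is clock rounding:

```go
// Recompute podStartSLOduration ≈ podStartE2EDuration − image pull time.
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	firstPull := mustParse("2025-01-16 09:06:16.771876732 +0000 UTC")
	lastPull := mustParse("2025-01-16 09:06:20.144509767 +0000 UTC")
	e2e := 9.18083333 // podStartE2EDuration in seconds, from the log

	pull := lastPull.Sub(firstPull).Seconds()
	fmt.Printf("pull=%.9fs slo≈%.9fs (logged 5.808200279s)\n", pull, e2e-pull)
}
```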
Jan 16 09:06:24.508802 kubelet[2571]: E0116 09:06:24.508715 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:24.510656 containerd[1482]: time="2025-01-16T09:06:24.509961642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d97bbc577-z7bfw,Uid:e8add061-66ff-4cb6-b3e6-324eba0461f9,Namespace:calico-system,Attempt:0,}" Jan 16 09:06:24.572789 kubelet[2571]: I0116 09:06:24.572709 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4739ced5-4f52-4587-93bd-e23be6f62634-var-run-calico\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575079 kubelet[2571]: I0116 09:06:24.575009 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4739ced5-4f52-4587-93bd-e23be6f62634-var-lib-calico\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575278 kubelet[2571]: I0116 09:06:24.575110 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4739ced5-4f52-4587-93bd-e23be6f62634-cni-bin-dir\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575278 kubelet[2571]: I0116 09:06:24.575154 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4739ced5-4f52-4587-93bd-e23be6f62634-policysync\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575278 kubelet[2571]: I0116 09:06:24.575188 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4739ced5-4f52-4587-93bd-e23be6f62634-node-certs\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575278 kubelet[2571]: I0116 09:06:24.575225 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4739ced5-4f52-4587-93bd-e23be6f62634-lib-modules\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575278 kubelet[2571]: I0116 09:06:24.575256 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4739ced5-4f52-4587-93bd-e23be6f62634-xtables-lock\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575625 kubelet[2571]: I0116 09:06:24.575290 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4739ced5-4f52-4587-93bd-e23be6f62634-tigera-ca-bundle\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " 
pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575625 kubelet[2571]: I0116 09:06:24.575321 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4739ced5-4f52-4587-93bd-e23be6f62634-cni-log-dir\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575625 kubelet[2571]: I0116 09:06:24.575362 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4739ced5-4f52-4587-93bd-e23be6f62634-flexvol-driver-host\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575625 kubelet[2571]: I0116 09:06:24.575397 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9k7p\" (UniqueName: \"kubernetes.io/projected/4739ced5-4f52-4587-93bd-e23be6f62634-kube-api-access-w9k7p\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.575625 kubelet[2571]: I0116 09:06:24.575430 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4739ced5-4f52-4587-93bd-e23be6f62634-cni-net-dir\") pod \"calico-node-cvrz2\" (UID: \"4739ced5-4f52-4587-93bd-e23be6f62634\") " pod="calico-system/calico-node-cvrz2" Jan 16 09:06:24.611649 kubelet[2571]: I0116 09:06:24.611545 2571 topology_manager.go:215] "Topology Admit Handler" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" podNamespace="calico-system" podName="csi-node-driver-r7n9l" Jan 16 09:06:24.614129 kubelet[2571]: E0116 09:06:24.613042 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7n9l" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" Jan 16 09:06:24.629857 containerd[1482]: time="2025-01-16T09:06:24.629542281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:24.631077 containerd[1482]: time="2025-01-16T09:06:24.630945622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:24.631467 containerd[1482]: time="2025-01-16T09:06:24.631045570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:24.631976 containerd[1482]: time="2025-01-16T09:06:24.631750475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:24.692108 kubelet[2571]: E0116 09:06:24.691282 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.692108 kubelet[2571]: W0116 09:06:24.691338 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.692108 kubelet[2571]: E0116 09:06:24.691489 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.701098 kubelet[2571]: E0116 09:06:24.699336 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.701098 kubelet[2571]: W0116 09:06:24.699394 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.701098 kubelet[2571]: E0116 09:06:24.699430 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.706182 kubelet[2571]: E0116 09:06:24.706027 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.706182 kubelet[2571]: W0116 09:06:24.706170 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.706432 kubelet[2571]: E0116 09:06:24.706222 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.707813 kubelet[2571]: E0116 09:06:24.707142 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.707813 kubelet[2571]: W0116 09:06:24.707370 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.707813 kubelet[2571]: E0116 09:06:24.707407 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.709888 kubelet[2571]: E0116 09:06:24.709832 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.710063 kubelet[2571]: W0116 09:06:24.709953 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.710063 kubelet[2571]: E0116 09:06:24.709985 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.712815 kubelet[2571]: E0116 09:06:24.711522 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.712815 kubelet[2571]: W0116 09:06:24.711569 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.712815 kubelet[2571]: E0116 09:06:24.711602 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.713957 kubelet[2571]: E0116 09:06:24.713075 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.713957 kubelet[2571]: W0116 09:06:24.713117 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.713957 kubelet[2571]: E0116 09:06:24.713152 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.713957 kubelet[2571]: E0116 09:06:24.713564 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.713957 kubelet[2571]: W0116 09:06:24.713592 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.713957 kubelet[2571]: E0116 09:06:24.713631 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.716809 kubelet[2571]: E0116 09:06:24.715120 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.716809 kubelet[2571]: W0116 09:06:24.715592 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.716809 kubelet[2571]: E0116 09:06:24.715661 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.717461 kubelet[2571]: E0116 09:06:24.717420 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.717461 kubelet[2571]: W0116 09:06:24.717448 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.717628 kubelet[2571]: E0116 09:06:24.717567 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.721408 kubelet[2571]: E0116 09:06:24.720928 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.721408 kubelet[2571]: W0116 09:06:24.720961 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.721408 kubelet[2571]: E0116 09:06:24.720991 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.721114 systemd[1]: Started cri-containerd-c9be08edbe4acb7e6193f16d8c9a12cb5adf242741625bbac11f8dcd19302b0b.scope - libcontainer container c9be08edbe4acb7e6193f16d8c9a12cb5adf242741625bbac11f8dcd19302b0b. Jan 16 09:06:24.721769 kubelet[2571]: E0116 09:06:24.721491 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.721769 kubelet[2571]: W0116 09:06:24.721509 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.721769 kubelet[2571]: E0116 09:06:24.721532 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.724817 kubelet[2571]: E0116 09:06:24.721927 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.724817 kubelet[2571]: W0116 09:06:24.721964 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.724817 kubelet[2571]: E0116 09:06:24.721982 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.725898 kubelet[2571]: E0116 09:06:24.725859 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.725898 kubelet[2571]: W0116 09:06:24.725889 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.726061 kubelet[2571]: E0116 09:06:24.725920 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.729429 kubelet[2571]: E0116 09:06:24.729288 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.729429 kubelet[2571]: W0116 09:06:24.729418 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.729708 kubelet[2571]: E0116 09:06:24.729453 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.730401 kubelet[2571]: E0116 09:06:24.730366 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.730401 kubelet[2571]: W0116 09:06:24.730392 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.730577 kubelet[2571]: E0116 09:06:24.730441 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.730921 kubelet[2571]: E0116 09:06:24.730894 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.731001 kubelet[2571]: W0116 09:06:24.730915 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.731001 kubelet[2571]: E0116 09:06:24.730965 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.731407 kubelet[2571]: E0116 09:06:24.731379 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.731407 kubelet[2571]: W0116 09:06:24.731401 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.731506 kubelet[2571]: E0116 09:06:24.731444 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.731984 kubelet[2571]: E0116 09:06:24.731948 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.731984 kubelet[2571]: W0116 09:06:24.731970 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.732132 kubelet[2571]: E0116 09:06:24.732014 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.733565 kubelet[2571]: E0116 09:06:24.732438 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.733565 kubelet[2571]: W0116 09:06:24.732458 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.733565 kubelet[2571]: E0116 09:06:24.732476 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.733565 kubelet[2571]: E0116 09:06:24.733358 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.733565 kubelet[2571]: W0116 09:06:24.733375 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.733565 kubelet[2571]: E0116 09:06:24.733393 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.734823 kubelet[2571]: E0116 09:06:24.734675 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.734823 kubelet[2571]: W0116 09:06:24.734822 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.734975 kubelet[2571]: E0116 09:06:24.734844 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.744281 kubelet[2571]: E0116 09:06:24.744216 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.744281 kubelet[2571]: W0116 09:06:24.744255 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.744281 kubelet[2571]: E0116 09:06:24.744288 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.754749 kubelet[2571]: E0116 09:06:24.754682 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:24.756593 containerd[1482]: time="2025-01-16T09:06:24.756534883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cvrz2,Uid:4739ced5-4f52-4587-93bd-e23be6f62634,Namespace:calico-system,Attempt:0,}" Jan 16 09:06:24.784018 kubelet[2571]: E0116 09:06:24.781000 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.784018 kubelet[2571]: W0116 09:06:24.781040 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.784018 kubelet[2571]: E0116 09:06:24.781076 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.784018 kubelet[2571]: I0116 09:06:24.781126 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59jhr\" (UniqueName: \"kubernetes.io/projected/b7bd711d-8793-408e-a86f-5638b4667c72-kube-api-access-59jhr\") pod \"csi-node-driver-r7n9l\" (UID: \"b7bd711d-8793-408e-a86f-5638b4667c72\") " pod="calico-system/csi-node-driver-r7n9l" Jan 16 09:06:24.784018 kubelet[2571]: E0116 09:06:24.782516 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.784018 kubelet[2571]: W0116 09:06:24.782551 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.784018 kubelet[2571]: E0116 09:06:24.783913 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.784018 kubelet[2571]: W0116 09:06:24.783941 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.784018 kubelet[2571]: E0116 09:06:24.784012 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.784554 kubelet[2571]: I0116 09:06:24.784064 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b7bd711d-8793-408e-a86f-5638b4667c72-varrun\") pod \"csi-node-driver-r7n9l\" (UID: \"b7bd711d-8793-408e-a86f-5638b4667c72\") " pod="calico-system/csi-node-driver-r7n9l" Jan 16 09:06:24.784554 kubelet[2571]: E0116 09:06:24.784362 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.787850 kubelet[2571]: E0116 09:06:24.785946 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.787850 kubelet[2571]: W0116 09:06:24.786011 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.787850 kubelet[2571]: E0116 09:06:24.786059 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.787850 kubelet[2571]: I0116 09:06:24.786457 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7bd711d-8793-408e-a86f-5638b4667c72-socket-dir\") pod \"csi-node-driver-r7n9l\" (UID: \"b7bd711d-8793-408e-a86f-5638b4667c72\") " pod="calico-system/csi-node-driver-r7n9l" Jan 16 09:06:24.787850 kubelet[2571]: E0116 09:06:24.787002 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.787850 kubelet[2571]: W0116 09:06:24.787022 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.787850 kubelet[2571]: E0116 09:06:24.787045 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.789092 kubelet[2571]: E0116 09:06:24.788707 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.789092 kubelet[2571]: W0116 09:06:24.788735 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.789092 kubelet[2571]: E0116 09:06:24.789030 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.793306 kubelet[2571]: E0116 09:06:24.790531 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.793306 kubelet[2571]: W0116 09:06:24.790565 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.793306 kubelet[2571]: E0116 09:06:24.790923 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.795353 kubelet[2571]: E0116 09:06:24.794179 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.795353 kubelet[2571]: W0116 09:06:24.794214 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.795353 kubelet[2571]: E0116 09:06:24.794967 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.797800 kubelet[2571]: E0116 09:06:24.797258 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.800180 kubelet[2571]: W0116 09:06:24.798161 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.800180 kubelet[2571]: E0116 09:06:24.798216 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.800180 kubelet[2571]: I0116 09:06:24.798264 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7bd711d-8793-408e-a86f-5638b4667c72-kubelet-dir\") pod \"csi-node-driver-r7n9l\" (UID: \"b7bd711d-8793-408e-a86f-5638b4667c72\") " pod="calico-system/csi-node-driver-r7n9l" Jan 16 09:06:24.800180 kubelet[2571]: E0116 09:06:24.799108 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.800180 kubelet[2571]: W0116 09:06:24.799133 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.800180 kubelet[2571]: E0116 09:06:24.799159 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.804825 kubelet[2571]: E0116 09:06:24.802390 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.804825 kubelet[2571]: W0116 09:06:24.802423 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.804825 kubelet[2571]: E0116 09:06:24.802574 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.806030 kubelet[2571]: E0116 09:06:24.805565 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.806030 kubelet[2571]: W0116 09:06:24.805599 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.806030 kubelet[2571]: E0116 09:06:24.805729 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.806030 kubelet[2571]: I0116 09:06:24.805985 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7bd711d-8793-408e-a86f-5638b4667c72-registration-dir\") pod \"csi-node-driver-r7n9l\" (UID: \"b7bd711d-8793-408e-a86f-5638b4667c72\") " pod="calico-system/csi-node-driver-r7n9l" Jan 16 09:06:24.806979 kubelet[2571]: E0116 09:06:24.806534 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.806979 kubelet[2571]: W0116 09:06:24.806557 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.806979 kubelet[2571]: E0116 09:06:24.806583 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.807645 kubelet[2571]: E0116 09:06:24.807605 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.807645 kubelet[2571]: W0116 09:06:24.807630 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.808260 kubelet[2571]: E0116 09:06:24.807656 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.808260 kubelet[2571]: E0116 09:06:24.808235 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.808260 kubelet[2571]: W0116 09:06:24.808253 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.808408 kubelet[2571]: E0116 09:06:24.808277 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.837218 containerd[1482]: time="2025-01-16T09:06:24.836058336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:24.837218 containerd[1482]: time="2025-01-16T09:06:24.837113443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:24.839235 containerd[1482]: time="2025-01-16T09:06:24.837152817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:24.840096 containerd[1482]: time="2025-01-16T09:06:24.839976302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:24.879235 systemd[1]: Started cri-containerd-06dddb64708c113d576c2552d1e5487c5ca4094ef8af0a058fb9416a3ee7dc5c.scope - libcontainer container 06dddb64708c113d576c2552d1e5487c5ca4094ef8af0a058fb9416a3ee7dc5c. Jan 16 09:06:24.907800 kubelet[2571]: E0116 09:06:24.907725 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.907800 kubelet[2571]: W0116 09:06:24.907763 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.907800 kubelet[2571]: E0116 09:06:24.907815 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.908315 kubelet[2571]: E0116 09:06:24.908289 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.908315 kubelet[2571]: W0116 09:06:24.908314 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.908480 kubelet[2571]: E0116 09:06:24.908346 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.908888 kubelet[2571]: E0116 09:06:24.908813 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.908888 kubelet[2571]: W0116 09:06:24.908835 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.908888 kubelet[2571]: E0116 09:06:24.908857 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.910225 kubelet[2571]: E0116 09:06:24.909184 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.910225 kubelet[2571]: W0116 09:06:24.909239 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.910225 kubelet[2571]: E0116 09:06:24.909264 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.910225 kubelet[2571]: E0116 09:06:24.910220 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.910629 kubelet[2571]: W0116 09:06:24.910238 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.910629 kubelet[2571]: E0116 09:06:24.910305 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.912231 kubelet[2571]: E0116 09:06:24.912190 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.912231 kubelet[2571]: W0116 09:06:24.912223 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.912546 kubelet[2571]: E0116 09:06:24.912303 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.913443 kubelet[2571]: E0116 09:06:24.913407 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.913443 kubelet[2571]: W0116 09:06:24.913436 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.913976 kubelet[2571]: E0116 09:06:24.913595 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.914071 kubelet[2571]: E0116 09:06:24.914036 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.914071 kubelet[2571]: W0116 09:06:24.914060 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.914235 kubelet[2571]: E0116 09:06:24.914186 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.915041 kubelet[2571]: E0116 09:06:24.915016 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.915041 kubelet[2571]: W0116 09:06:24.915036 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.915243 kubelet[2571]: E0116 09:06:24.915130 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.915543 kubelet[2571]: E0116 09:06:24.915507 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.915543 kubelet[2571]: W0116 09:06:24.915529 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.915836 kubelet[2571]: E0116 09:06:24.915621 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.916396 kubelet[2571]: E0116 09:06:24.916371 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.916396 kubelet[2571]: W0116 09:06:24.916390 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.916720 kubelet[2571]: E0116 09:06:24.916616 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.917068 kubelet[2571]: E0116 09:06:24.917047 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.917068 kubelet[2571]: W0116 09:06:24.917065 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.917403 kubelet[2571]: E0116 09:06:24.917142 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.917701 kubelet[2571]: E0116 09:06:24.917680 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.917701 kubelet[2571]: W0116 09:06:24.917699 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.918101 kubelet[2571]: E0116 09:06:24.917920 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.919281 kubelet[2571]: E0116 09:06:24.919255 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.919281 kubelet[2571]: W0116 09:06:24.919278 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.919714 kubelet[2571]: E0116 09:06:24.919379 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.919714 kubelet[2571]: E0116 09:06:24.919577 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.919714 kubelet[2571]: W0116 09:06:24.919591 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.919714 kubelet[2571]: E0116 09:06:24.919632 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.920546 kubelet[2571]: E0116 09:06:24.920429 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.920546 kubelet[2571]: W0116 09:06:24.920448 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.920841 kubelet[2571]: E0116 09:06:24.920685 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.921335 kubelet[2571]: E0116 09:06:24.921306 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.921335 kubelet[2571]: W0116 09:06:24.921332 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.921899 kubelet[2571]: E0116 09:06:24.921489 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.922651 kubelet[2571]: E0116 09:06:24.922624 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.922651 kubelet[2571]: W0116 09:06:24.922646 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.922947 kubelet[2571]: E0116 09:06:24.922750 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.923955 kubelet[2571]: E0116 09:06:24.923927 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.923955 kubelet[2571]: W0116 09:06:24.923951 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.924820 kubelet[2571]: E0116 09:06:24.924102 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.925221 kubelet[2571]: E0116 09:06:24.925193 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.925469 kubelet[2571]: W0116 09:06:24.925216 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.925469 kubelet[2571]: E0116 09:06:24.925373 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.926026 kubelet[2571]: E0116 09:06:24.925999 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.926026 kubelet[2571]: W0116 09:06:24.926021 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.926351 kubelet[2571]: E0116 09:06:24.926146 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.926815 kubelet[2571]: E0116 09:06:24.926745 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.926815 kubelet[2571]: W0116 09:06:24.926768 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.927365 kubelet[2571]: E0116 09:06:24.927270 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.927465 kubelet[2571]: E0116 09:06:24.927441 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.927465 kubelet[2571]: W0116 09:06:24.927462 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.927612 kubelet[2571]: E0116 09:06:24.927569 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.928221 kubelet[2571]: E0116 09:06:24.928191 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.928221 kubelet[2571]: W0116 09:06:24.928211 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.928844 kubelet[2571]: E0116 09:06:24.928257 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:24.928844 kubelet[2571]: E0116 09:06:24.928763 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.928844 kubelet[2571]: W0116 09:06:24.928792 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.928844 kubelet[2571]: E0116 09:06:24.928810 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:24.953058 kubelet[2571]: E0116 09:06:24.952901 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:24.953058 kubelet[2571]: W0116 09:06:24.952940 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:24.953058 kubelet[2571]: E0116 09:06:24.952973 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:25.013324 containerd[1482]: time="2025-01-16T09:06:25.013144825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d97bbc577-z7bfw,Uid:e8add061-66ff-4cb6-b3e6-324eba0461f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9be08edbe4acb7e6193f16d8c9a12cb5adf242741625bbac11f8dcd19302b0b\"" Jan 16 09:06:25.016336 kubelet[2571]: E0116 09:06:25.016114 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:25.021490 containerd[1482]: time="2025-01-16T09:06:25.021413695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 16 09:06:25.118479 containerd[1482]: time="2025-01-16T09:06:25.118065803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cvrz2,Uid:4739ced5-4f52-4587-93bd-e23be6f62634,Namespace:calico-system,Attempt:0,} returns sandbox id \"06dddb64708c113d576c2552d1e5487c5ca4094ef8af0a058fb9416a3ee7dc5c\"" Jan 16 09:06:25.120027 kubelet[2571]: E0116 09:06:25.119651 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:25.402440 systemd[1]: run-containerd-runc-k8s.io-c9be08edbe4acb7e6193f16d8c9a12cb5adf242741625bbac11f8dcd19302b0b-runc.IgL5II.mount: Deactivated successfully. Jan 16 09:06:26.463117 kubelet[2571]: E0116 09:06:26.462976 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7n9l" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" Jan 16 09:06:26.761661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101331717.mount: Deactivated successfully. 
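The kubelet triplet repeated throughout this stretch is one failure mode reported from three call sites: the FlexVolume prober finds the nodeagent~uds plugin directory, the driver binary it expects there cannot be found, the captured output is therefore empty, and unmarshalling that empty output is what yields "unexpected end of JSON input". A minimal, self-contained Go sketch of that sequence (not kubelet's actual driver-call.go; the bare "uds" name is borrowed from the log and assumed not to be installed on the machine running the sketch):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// A $PATH lookup for a driver binary that is not installed fails with
	// exec.ErrNotFound, i.e. "executable file not found in $PATH".
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("driver call failed:", err)
	}

	// With no driver to run, the captured output stays empty, and decoding an
	// empty byte slice is exactly what produces the JSON error in the log.
	var status map[string]interface{}
	if err := json.Unmarshal([]byte(""), &status); err != nil {
		fmt.Println("failed to unmarshal output for command init:", err) // unexpected end of JSON input
	}
}

As the "skipping" lines show, the prober simply skips the nodeagent~uds directory each time, so the messages recur on every probe without blocking the Calico pods being started around them.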
Jan 16 09:06:27.913941 containerd[1482]: time="2025-01-16T09:06:27.913847586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:27.925754 containerd[1482]: time="2025-01-16T09:06:27.925578662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 16 09:06:27.996599 containerd[1482]: time="2025-01-16T09:06:27.995946618Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:28.004920 containerd[1482]: time="2025-01-16T09:06:28.004463938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:28.006906 containerd[1482]: time="2025-01-16T09:06:28.006731054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.985221122s" Jan 16 09:06:28.006906 containerd[1482]: time="2025-01-16T09:06:28.006837799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 16 09:06:28.017935 containerd[1482]: time="2025-01-16T09:06:28.011409866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 16 09:06:28.041929 containerd[1482]: time="2025-01-16T09:06:28.039831746Z" level=info msg="CreateContainer within sandbox \"c9be08edbe4acb7e6193f16d8c9a12cb5adf242741625bbac11f8dcd19302b0b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 16 09:06:28.083883 containerd[1482]: time="2025-01-16T09:06:28.083631312Z" level=info msg="CreateContainer within sandbox \"c9be08edbe4acb7e6193f16d8c9a12cb5adf242741625bbac11f8dcd19302b0b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f0c46ce29bd7fb88e9879c02228b8000d334d58bf3bc12b781031933a9f58e19\"" Jan 16 09:06:28.087626 containerd[1482]: time="2025-01-16T09:06:28.086878729Z" level=info msg="StartContainer for \"f0c46ce29bd7fb88e9879c02228b8000d334d58bf3bc12b781031933a9f58e19\"" Jan 16 09:06:28.158468 systemd[1]: Started cri-containerd-f0c46ce29bd7fb88e9879c02228b8000d334d58bf3bc12b781031933a9f58e19.scope - libcontainer container f0c46ce29bd7fb88e9879c02228b8000d334d58bf3bc12b781031933a9f58e19. 
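The pull recorded above fetched ghcr.io/flatcar/calico/typha:v3.29.1 (about 31 MB) in roughly three seconds, after which the calico-typha container was created and started from it. The same fetch can be reproduced against this node's containerd outside the CRI path; a rough sketch with containerd's Go client, assuming the default /run/containerd/containerd.sock socket and the "k8s.io" namespace that CRI-managed images live in:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the containerd instance logging as containerd[1482] above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same typha image the kubelet requested via PullImage.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}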
Jan 16 09:06:28.247769 containerd[1482]: time="2025-01-16T09:06:28.246392994Z" level=info msg="StartContainer for \"f0c46ce29bd7fb88e9879c02228b8000d334d58bf3bc12b781031933a9f58e19\" returns successfully" Jan 16 09:06:28.463516 kubelet[2571]: E0116 09:06:28.463437 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7n9l" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" Jan 16 09:06:28.690586 kubelet[2571]: E0116 09:06:28.690525 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:28.778808 kubelet[2571]: E0116 09:06:28.778477 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.778808 kubelet[2571]: W0116 09:06:28.778522 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.778808 kubelet[2571]: E0116 09:06:28.778559 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.781346 kubelet[2571]: E0116 09:06:28.781300 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.782032 kubelet[2571]: W0116 09:06:28.781582 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.782032 kubelet[2571]: E0116 09:06:28.781830 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.786652 kubelet[2571]: E0116 09:06:28.786176 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.786652 kubelet[2571]: W0116 09:06:28.786219 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.786652 kubelet[2571]: E0116 09:06:28.786255 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.788902 kubelet[2571]: E0116 09:06:28.788654 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.788902 kubelet[2571]: W0116 09:06:28.788700 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.788902 kubelet[2571]: E0116 09:06:28.788746 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:28.790816 kubelet[2571]: E0116 09:06:28.790702 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.790816 kubelet[2571]: W0116 09:06:28.790735 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.790816 kubelet[2571]: E0116 09:06:28.790765 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.791869 kubelet[2571]: E0116 09:06:28.791460 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.791869 kubelet[2571]: W0116 09:06:28.791484 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.791869 kubelet[2571]: E0116 09:06:28.791507 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.793407 kubelet[2571]: E0116 09:06:28.793327 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.793407 kubelet[2571]: W0116 09:06:28.793370 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.793407 kubelet[2571]: E0116 09:06:28.793400 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.794400 kubelet[2571]: E0116 09:06:28.793734 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.794400 kubelet[2571]: W0116 09:06:28.793750 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.794400 kubelet[2571]: E0116 09:06:28.793769 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.794400 kubelet[2571]: E0116 09:06:28.794109 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.794400 kubelet[2571]: W0116 09:06:28.794123 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.794400 kubelet[2571]: E0116 09:06:28.794140 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:28.795071 kubelet[2571]: E0116 09:06:28.794917 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.795071 kubelet[2571]: W0116 09:06:28.794934 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.795071 kubelet[2571]: E0116 09:06:28.794952 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.795461 kubelet[2571]: E0116 09:06:28.795320 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.795461 kubelet[2571]: W0116 09:06:28.795341 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.795461 kubelet[2571]: E0116 09:06:28.795362 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.796623 kubelet[2571]: E0116 09:06:28.796153 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.796623 kubelet[2571]: W0116 09:06:28.796177 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.796623 kubelet[2571]: E0116 09:06:28.796200 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.797266 kubelet[2571]: E0116 09:06:28.797226 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.797266 kubelet[2571]: W0116 09:06:28.797256 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.797425 kubelet[2571]: E0116 09:06:28.797281 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.797865 kubelet[2571]: E0116 09:06:28.797684 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.797865 kubelet[2571]: W0116 09:06:28.797704 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.797865 kubelet[2571]: E0116 09:06:28.797726 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:28.798265 kubelet[2571]: E0116 09:06:28.798108 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.798265 kubelet[2571]: W0116 09:06:28.798127 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.798265 kubelet[2571]: E0116 09:06:28.798147 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.866621 kubelet[2571]: E0116 09:06:28.866359 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.866621 kubelet[2571]: W0116 09:06:28.866466 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.866621 kubelet[2571]: E0116 09:06:28.866498 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.867255 kubelet[2571]: E0116 09:06:28.867191 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.867255 kubelet[2571]: W0116 09:06:28.867235 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.867414 kubelet[2571]: E0116 09:06:28.867274 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.867746 kubelet[2571]: E0116 09:06:28.867704 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.868264 kubelet[2571]: W0116 09:06:28.867748 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.868264 kubelet[2571]: E0116 09:06:28.867797 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.868543 kubelet[2571]: E0116 09:06:28.868276 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.868543 kubelet[2571]: W0116 09:06:28.868293 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.868543 kubelet[2571]: E0116 09:06:28.868480 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:28.869167 kubelet[2571]: E0116 09:06:28.869139 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.869167 kubelet[2571]: W0116 09:06:28.869167 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.869347 kubelet[2571]: E0116 09:06:28.869194 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.871924 kubelet[2571]: E0116 09:06:28.871879 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.872635 kubelet[2571]: W0116 09:06:28.871928 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.872635 kubelet[2571]: E0116 09:06:28.871971 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.873144 kubelet[2571]: E0116 09:06:28.872957 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.873144 kubelet[2571]: W0116 09:06:28.872989 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.873144 kubelet[2571]: E0116 09:06:28.873067 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.880409 kubelet[2571]: E0116 09:06:28.874047 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.880409 kubelet[2571]: W0116 09:06:28.874072 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.880409 kubelet[2571]: E0116 09:06:28.878975 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.880994 kubelet[2571]: E0116 09:06:28.880955 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.880994 kubelet[2571]: W0116 09:06:28.880990 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.881601 kubelet[2571]: E0116 09:06:28.881208 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:28.881601 kubelet[2571]: E0116 09:06:28.881402 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.881601 kubelet[2571]: W0116 09:06:28.881421 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.882259 kubelet[2571]: E0116 09:06:28.882130 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.882459 kubelet[2571]: E0116 09:06:28.882315 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.882459 kubelet[2571]: W0116 09:06:28.882334 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.882459 kubelet[2571]: E0116 09:06:28.882388 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.883736 kubelet[2571]: E0116 09:06:28.883705 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.883736 kubelet[2571]: W0116 09:06:28.883730 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.884010 kubelet[2571]: E0116 09:06:28.883858 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.884371 kubelet[2571]: E0116 09:06:28.884287 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.884371 kubelet[2571]: W0116 09:06:28.884311 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.884371 kubelet[2571]: E0116 09:06:28.884338 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.884687 kubelet[2571]: E0116 09:06:28.884666 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.884687 kubelet[2571]: W0116 09:06:28.884684 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.886094 kubelet[2571]: E0116 09:06:28.884707 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:28.886425 kubelet[2571]: E0116 09:06:28.886398 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.886425 kubelet[2571]: W0116 09:06:28.886425 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.886643 kubelet[2571]: E0116 09:06:28.886519 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.887035 kubelet[2571]: E0116 09:06:28.887014 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.887035 kubelet[2571]: W0116 09:06:28.887034 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.887166 kubelet[2571]: E0116 09:06:28.887055 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.887845 kubelet[2571]: E0116 09:06:28.887416 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.887845 kubelet[2571]: W0116 09:06:28.887433 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.887845 kubelet[2571]: E0116 09:06:28.887450 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:28.888263 kubelet[2571]: E0116 09:06:28.888243 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:28.888263 kubelet[2571]: W0116 09:06:28.888261 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:28.888370 kubelet[2571]: E0116 09:06:28.888300 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:29.698930 kubelet[2571]: I0116 09:06:29.697982 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:06:29.699475 kubelet[2571]: E0116 09:06:29.699169 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:29.716004 kubelet[2571]: E0116 09:06:29.715218 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.716004 kubelet[2571]: W0116 09:06:29.715254 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.716004 kubelet[2571]: E0116 09:06:29.715282 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.716004 kubelet[2571]: E0116 09:06:29.715820 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.716004 kubelet[2571]: W0116 09:06:29.715844 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.716004 kubelet[2571]: E0116 09:06:29.715865 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.716675 kubelet[2571]: E0116 09:06:29.716144 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.716675 kubelet[2571]: W0116 09:06:29.716157 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.716675 kubelet[2571]: E0116 09:06:29.716174 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.717049 kubelet[2571]: E0116 09:06:29.716852 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.717049 kubelet[2571]: W0116 09:06:29.716877 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.717049 kubelet[2571]: E0116 09:06:29.716895 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:29.717579 kubelet[2571]: E0116 09:06:29.717460 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.717579 kubelet[2571]: W0116 09:06:29.717475 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.717579 kubelet[2571]: E0116 09:06:29.717488 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.718186 kubelet[2571]: E0116 09:06:29.718022 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.718186 kubelet[2571]: W0116 09:06:29.718042 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.718186 kubelet[2571]: E0116 09:06:29.718060 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.718446 kubelet[2571]: E0116 09:06:29.718430 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.718530 kubelet[2571]: W0116 09:06:29.718517 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.718603 kubelet[2571]: E0116 09:06:29.718590 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.719006 kubelet[2571]: E0116 09:06:29.718980 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.719491 kubelet[2571]: W0116 09:06:29.719134 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.719491 kubelet[2571]: E0116 09:06:29.719161 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.719711 kubelet[2571]: E0116 09:06:29.719693 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.719819 kubelet[2571]: W0116 09:06:29.719802 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.719894 kubelet[2571]: E0116 09:06:29.719884 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:29.720540 kubelet[2571]: E0116 09:06:29.720516 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.720897 kubelet[2571]: W0116 09:06:29.720656 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.720897 kubelet[2571]: E0116 09:06:29.720683 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.721170 kubelet[2571]: E0116 09:06:29.721154 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.721259 kubelet[2571]: W0116 09:06:29.721244 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.721351 kubelet[2571]: E0116 09:06:29.721331 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.722432 kubelet[2571]: E0116 09:06:29.722396 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.722696 kubelet[2571]: W0116 09:06:29.722592 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.722696 kubelet[2571]: E0116 09:06:29.722619 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.723184 kubelet[2571]: E0116 09:06:29.723159 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.723184 kubelet[2571]: W0116 09:06:29.723181 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.723277 kubelet[2571]: E0116 09:06:29.723197 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.723530 kubelet[2571]: E0116 09:06:29.723498 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.723530 kubelet[2571]: W0116 09:06:29.723523 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.723605 kubelet[2571]: E0116 09:06:29.723539 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:29.723879 kubelet[2571]: E0116 09:06:29.723797 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.723879 kubelet[2571]: W0116 09:06:29.723812 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.723879 kubelet[2571]: E0116 09:06:29.723824 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.779360 kubelet[2571]: E0116 09:06:29.779160 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.779360 kubelet[2571]: W0116 09:06:29.779221 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.779360 kubelet[2571]: E0116 09:06:29.779254 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.779954 kubelet[2571]: E0116 09:06:29.779726 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.779954 kubelet[2571]: W0116 09:06:29.779743 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.779954 kubelet[2571]: E0116 09:06:29.779763 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.788832 kubelet[2571]: E0116 09:06:29.780468 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.788832 kubelet[2571]: W0116 09:06:29.780495 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.788832 kubelet[2571]: E0116 09:06:29.780530 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.788832 kubelet[2571]: E0116 09:06:29.780853 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.788832 kubelet[2571]: W0116 09:06:29.780867 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.788832 kubelet[2571]: E0116 09:06:29.780895 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:29.788832 kubelet[2571]: E0116 09:06:29.781248 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.788832 kubelet[2571]: W0116 09:06:29.781262 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.788832 kubelet[2571]: E0116 09:06:29.781573 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.788832 kubelet[2571]: W0116 09:06:29.781607 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.789515 kubelet[2571]: E0116 09:06:29.781624 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.789515 kubelet[2571]: E0116 09:06:29.781850 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.789515 kubelet[2571]: E0116 09:06:29.782087 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.789515 kubelet[2571]: W0116 09:06:29.782102 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.789515 kubelet[2571]: E0116 09:06:29.782247 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.789515 kubelet[2571]: E0116 09:06:29.782816 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.789515 kubelet[2571]: W0116 09:06:29.782831 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.789515 kubelet[2571]: E0116 09:06:29.782858 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.789515 kubelet[2571]: E0116 09:06:29.783598 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.789515 kubelet[2571]: W0116 09:06:29.783618 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.790282 kubelet[2571]: E0116 09:06:29.783705 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:29.790282 kubelet[2571]: E0116 09:06:29.783992 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.790282 kubelet[2571]: W0116 09:06:29.784005 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.790282 kubelet[2571]: E0116 09:06:29.784124 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.790282 kubelet[2571]: E0116 09:06:29.784959 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.790282 kubelet[2571]: W0116 09:06:29.784973 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.790282 kubelet[2571]: E0116 09:06:29.785292 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.790282 kubelet[2571]: W0116 09:06:29.785307 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.790282 kubelet[2571]: E0116 09:06:29.785566 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.790282 kubelet[2571]: W0116 09:06:29.785577 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.792742 kubelet[2571]: E0116 09:06:29.785611 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.792742 kubelet[2571]: E0116 09:06:29.785655 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.792742 kubelet[2571]: E0116 09:06:29.785760 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.792742 kubelet[2571]: E0116 09:06:29.790722 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.792742 kubelet[2571]: W0116 09:06:29.790755 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.792742 kubelet[2571]: E0116 09:06:29.790836 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:29.792742 kubelet[2571]: E0116 09:06:29.791263 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.792742 kubelet[2571]: W0116 09:06:29.791279 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.792742 kubelet[2571]: E0116 09:06:29.791406 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.793830 kubelet[2571]: E0116 09:06:29.793263 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.793830 kubelet[2571]: W0116 09:06:29.793293 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.793830 kubelet[2571]: E0116 09:06:29.793339 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.796901 kubelet[2571]: E0116 09:06:29.795167 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.796901 kubelet[2571]: W0116 09:06:29.795198 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.796901 kubelet[2571]: E0116 09:06:29.795225 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:29.797445 kubelet[2571]: E0116 09:06:29.797304 2571 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:29.797445 kubelet[2571]: W0116 09:06:29.797337 2571 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:29.797445 kubelet[2571]: E0116 09:06:29.797371 2571 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:29.995952 containerd[1482]: time="2025-01-16T09:06:29.990345475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:29.995952 containerd[1482]: time="2025-01-16T09:06:29.994507527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 16 09:06:29.995952 containerd[1482]: time="2025-01-16T09:06:29.995639712Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:30.074620 containerd[1482]: time="2025-01-16T09:06:30.073338671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:30.074620 containerd[1482]: time="2025-01-16T09:06:30.074172175Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.062701843s" Jan 16 09:06:30.075040 containerd[1482]: time="2025-01-16T09:06:30.074225686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 16 09:06:30.088418 containerd[1482]: time="2025-01-16T09:06:30.087508476Z" level=info msg="CreateContainer within sandbox \"06dddb64708c113d576c2552d1e5487c5ca4094ef8af0a058fb9416a3ee7dc5c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 09:06:30.162511 containerd[1482]: time="2025-01-16T09:06:30.162419164Z" level=info msg="CreateContainer within sandbox \"06dddb64708c113d576c2552d1e5487c5ca4094ef8af0a058fb9416a3ee7dc5c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"89abb818e41131e74437e7e8dfac459cbdb99e3914a585fbe0a473323da0623a\"" Jan 16 09:06:30.164222 containerd[1482]: time="2025-01-16T09:06:30.163908737Z" level=info msg="StartContainer for \"89abb818e41131e74437e7e8dfac459cbdb99e3914a585fbe0a473323da0623a\"" Jan 16 09:06:30.250108 systemd[1]: Started cri-containerd-89abb818e41131e74437e7e8dfac459cbdb99e3914a585fbe0a473323da0623a.scope - libcontainer container 89abb818e41131e74437e7e8dfac459cbdb99e3914a585fbe0a473323da0623a. Jan 16 09:06:30.343164 systemd[1]: cri-containerd-89abb818e41131e74437e7e8dfac459cbdb99e3914a585fbe0a473323da0623a.scope: Deactivated successfully. 
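[Editor's note, not part of the log] The repeated driver-call.go / plugins.go errors above come from the kubelet's FlexVolume prober executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init": the binary is not installed yet (the pod2daemon-flexvol container being pulled here is what eventually drops it in place), so the call produces empty output, which then fails JSON decoding. A minimal sketch of the reply shape the prober expects, assuming the standard FlexVolume init contract; field names are illustrative, not taken from this log:

```go
// Sketch of a FlexVolume "init" reply and of why empty output fails to decode.
package main

import (
	"encoding/json"
	"fmt"
)

type driverCapabilities struct {
	Attach bool `json:"attach"`
}

type driverStatus struct {
	Status       string              `json:"status"`
	Message      string              `json:"message,omitempty"`
	Capabilities *driverCapabilities `json:"capabilities,omitempty"`
}

func main() {
	// What a working driver would print to stdout for "init".
	out, _ := json.Marshal(driverStatus{
		Status:       "Success",
		Capabilities: &driverCapabilities{Attach: false},
	})
	fmt.Println(string(out))

	// What the kubelet got here instead: no executable, so no output at all.
	var got driverStatus
	if err := json.Unmarshal([]byte(""), &got); err != nil {
		fmt.Println("error:", err) // prints: unexpected end of JSON input
	}
}
```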
Jan 16 09:06:30.434555 containerd[1482]: time="2025-01-16T09:06:30.434376446Z" level=info msg="StartContainer for \"89abb818e41131e74437e7e8dfac459cbdb99e3914a585fbe0a473323da0623a\" returns successfully" Jan 16 09:06:30.463597 kubelet[2571]: E0116 09:06:30.463518 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7n9l" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" Jan 16 09:06:30.486794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89abb818e41131e74437e7e8dfac459cbdb99e3914a585fbe0a473323da0623a-rootfs.mount: Deactivated successfully. Jan 16 09:06:30.541917 containerd[1482]: time="2025-01-16T09:06:30.496572176Z" level=info msg="shim disconnected" id=89abb818e41131e74437e7e8dfac459cbdb99e3914a585fbe0a473323da0623a namespace=k8s.io Jan 16 09:06:30.541917 containerd[1482]: time="2025-01-16T09:06:30.541161925Z" level=warning msg="cleaning up after shim disconnected" id=89abb818e41131e74437e7e8dfac459cbdb99e3914a585fbe0a473323da0623a namespace=k8s.io Jan 16 09:06:30.541917 containerd[1482]: time="2025-01-16T09:06:30.541188504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:06:30.709179 kubelet[2571]: E0116 09:06:30.708767 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:30.714152 containerd[1482]: time="2025-01-16T09:06:30.714100977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 16 09:06:30.756864 kubelet[2571]: I0116 09:06:30.755896 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d97bbc577-z7bfw" podStartSLOduration=3.765159426 podStartE2EDuration="6.755867625s" podCreationTimestamp="2025-01-16 09:06:24 +0000 UTC" firstStartedPulling="2025-01-16 09:06:25.019046791 +0000 UTC m=+21.955450619" lastFinishedPulling="2025-01-16 09:06:28.009754928 +0000 UTC m=+24.946158818" observedRunningTime="2025-01-16 09:06:28.779998024 +0000 UTC m=+25.716401883" watchObservedRunningTime="2025-01-16 09:06:30.755867625 +0000 UTC m=+27.692271462" Jan 16 09:06:32.463138 kubelet[2571]: E0116 09:06:32.463033 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7n9l" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" Jan 16 09:06:34.462811 kubelet[2571]: E0116 09:06:34.462714 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7n9l" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" Jan 16 09:06:36.463634 kubelet[2571]: E0116 09:06:36.463554 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7n9l" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" Jan 16 09:06:36.584876 containerd[1482]: 
time="2025-01-16T09:06:36.584345265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:36.588731 containerd[1482]: time="2025-01-16T09:06:36.588631221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 16 09:06:36.594823 containerd[1482]: time="2025-01-16T09:06:36.593207910Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:36.599029 containerd[1482]: time="2025-01-16T09:06:36.598934740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:36.599322 containerd[1482]: time="2025-01-16T09:06:36.599281912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.885122845s" Jan 16 09:06:36.599434 containerd[1482]: time="2025-01-16T09:06:36.599413041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 16 09:06:36.644803 containerd[1482]: time="2025-01-16T09:06:36.644714462Z" level=info msg="CreateContainer within sandbox \"06dddb64708c113d576c2552d1e5487c5ca4094ef8af0a058fb9416a3ee7dc5c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 09:06:36.737165 containerd[1482]: time="2025-01-16T09:06:36.736964896Z" level=info msg="CreateContainer within sandbox \"06dddb64708c113d576c2552d1e5487c5ca4094ef8af0a058fb9416a3ee7dc5c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8233b3ec3603fc0ddd6d6d93e372c17686b6860e9d8977056ff9620b71b06969\"" Jan 16 09:06:36.740814 containerd[1482]: time="2025-01-16T09:06:36.739358854Z" level=info msg="StartContainer for \"8233b3ec3603fc0ddd6d6d93e372c17686b6860e9d8977056ff9620b71b06969\"" Jan 16 09:06:36.857191 systemd[1]: Started cri-containerd-8233b3ec3603fc0ddd6d6d93e372c17686b6860e9d8977056ff9620b71b06969.scope - libcontainer container 8233b3ec3603fc0ddd6d6d93e372c17686b6860e9d8977056ff9620b71b06969. Jan 16 09:06:36.923492 containerd[1482]: time="2025-01-16T09:06:36.923406900Z" level=info msg="StartContainer for \"8233b3ec3603fc0ddd6d6d93e372c17686b6860e9d8977056ff9620b71b06969\" returns successfully" Jan 16 09:06:37.738395 kubelet[2571]: E0116 09:06:37.738345 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:38.000989 systemd[1]: cri-containerd-8233b3ec3603fc0ddd6d6d93e372c17686b6860e9d8977056ff9620b71b06969.scope: Deactivated successfully. Jan 16 09:06:38.043557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8233b3ec3603fc0ddd6d6d93e372c17686b6860e9d8977056ff9620b71b06969-rootfs.mount: Deactivated successfully. 
Jan 16 09:06:38.059886 containerd[1482]: time="2025-01-16T09:06:38.059156039Z" level=info msg="shim disconnected" id=8233b3ec3603fc0ddd6d6d93e372c17686b6860e9d8977056ff9620b71b06969 namespace=k8s.io Jan 16 09:06:38.059886 containerd[1482]: time="2025-01-16T09:06:38.059227968Z" level=warning msg="cleaning up after shim disconnected" id=8233b3ec3603fc0ddd6d6d93e372c17686b6860e9d8977056ff9620b71b06969 namespace=k8s.io Jan 16 09:06:38.059886 containerd[1482]: time="2025-01-16T09:06:38.059241409Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:06:38.070283 kubelet[2571]: I0116 09:06:38.070225 2571 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 16 09:06:38.134382 kubelet[2571]: I0116 09:06:38.132798 2571 topology_manager.go:215] "Topology Admit Handler" podUID="3be6b83d-b704-4612-9bea-5273dc682d78" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cz7k2" Jan 16 09:06:38.144469 kubelet[2571]: I0116 09:06:38.142992 2571 topology_manager.go:215] "Topology Admit Handler" podUID="9f7d449c-357b-4091-8b94-7aeb96a263ac" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ff2ps" Jan 16 09:06:38.147604 systemd[1]: Created slice kubepods-burstable-pod3be6b83d_b704_4612_9bea_5273dc682d78.slice - libcontainer container kubepods-burstable-pod3be6b83d_b704_4612_9bea_5273dc682d78.slice. Jan 16 09:06:38.151815 kubelet[2571]: I0116 09:06:38.150323 2571 topology_manager.go:215] "Topology Admit Handler" podUID="51a16b4c-b541-40b1-ba52-b426bfe5e240" podNamespace="calico-apiserver" podName="calico-apiserver-5ff4995848-5jlzt" Jan 16 09:06:38.157090 kubelet[2571]: I0116 09:06:38.157035 2571 topology_manager.go:215] "Topology Admit Handler" podUID="9030e0e7-f33c-4169-9df7-1b9ed86d0a85" podNamespace="calico-system" podName="calico-kube-controllers-7698c84dd8-drr4g" Jan 16 09:06:38.159672 kubelet[2571]: I0116 09:06:38.159616 2571 topology_manager.go:215] "Topology Admit Handler" podUID="16bb8998-202c-4c00-8496-dc8eaaa9a516" podNamespace="calico-apiserver" podName="calico-apiserver-5ff4995848-jjr88" Jan 16 09:06:38.168661 systemd[1]: Created slice kubepods-burstable-pod9f7d449c_357b_4091_8b94_7aeb96a263ac.slice - libcontainer container kubepods-burstable-pod9f7d449c_357b_4091_8b94_7aeb96a263ac.slice. Jan 16 09:06:38.186089 systemd[1]: Created slice kubepods-besteffort-pod51a16b4c_b541_40b1_ba52_b426bfe5e240.slice - libcontainer container kubepods-besteffort-pod51a16b4c_b541_40b1_ba52_b426bfe5e240.slice. Jan 16 09:06:38.200430 systemd[1]: Created slice kubepods-besteffort-pod9030e0e7_f33c_4169_9df7_1b9ed86d0a85.slice - libcontainer container kubepods-besteffort-pod9030e0e7_f33c_4169_9df7_1b9ed86d0a85.slice. Jan 16 09:06:38.211252 systemd[1]: Created slice kubepods-besteffort-pod16bb8998_202c_4c00_8496_dc8eaaa9a516.slice - libcontainer container kubepods-besteffort-pod16bb8998_202c_4c00_8496_dc8eaaa9a516.slice. 
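[Editor's note, not part of the log] Once the node reports Ready, the pending CoreDNS, calico-apiserver, and kube-controllers pods are admitted and each gets a systemd slice under kubepods. The slice names in the entries above are derived from the pod UID with dashes mangled to underscores; a small sketch of that naming, reconstructed from the log rather than taken from kubelet source:

```go
// Sketch of how the kubepods slice names above map to the pod UIDs in the log.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	mangled := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, mangled)
}

func main() {
	// Matches kubepods-burstable-pod3be6b83d_b704_4612_9bea_5273dc682d78.slice
	fmt.Println(podSliceName("burstable", "3be6b83d-b704-4612-9bea-5273dc682d78"))
	// Matches kubepods-besteffort-pod16bb8998_202c_4c00_8496_dc8eaaa9a516.slice
	fmt.Println(podSliceName("besteffort", "16bb8998-202c-4c00-8496-dc8eaaa9a516"))
}
```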
Jan 16 09:06:38.283564 kubelet[2571]: I0116 09:06:38.283378 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9q22\" (UniqueName: \"kubernetes.io/projected/3be6b83d-b704-4612-9bea-5273dc682d78-kube-api-access-s9q22\") pod \"coredns-7db6d8ff4d-cz7k2\" (UID: \"3be6b83d-b704-4612-9bea-5273dc682d78\") " pod="kube-system/coredns-7db6d8ff4d-cz7k2" Jan 16 09:06:38.283564 kubelet[2571]: I0116 09:06:38.283457 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3be6b83d-b704-4612-9bea-5273dc682d78-config-volume\") pod \"coredns-7db6d8ff4d-cz7k2\" (UID: \"3be6b83d-b704-4612-9bea-5273dc682d78\") " pod="kube-system/coredns-7db6d8ff4d-cz7k2" Jan 16 09:06:38.283564 kubelet[2571]: I0116 09:06:38.283514 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j79ws\" (UniqueName: \"kubernetes.io/projected/16bb8998-202c-4c00-8496-dc8eaaa9a516-kube-api-access-j79ws\") pod \"calico-apiserver-5ff4995848-jjr88\" (UID: \"16bb8998-202c-4c00-8496-dc8eaaa9a516\") " pod="calico-apiserver/calico-apiserver-5ff4995848-jjr88" Jan 16 09:06:38.285137 kubelet[2571]: I0116 09:06:38.285068 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsmtw\" (UniqueName: \"kubernetes.io/projected/51a16b4c-b541-40b1-ba52-b426bfe5e240-kube-api-access-fsmtw\") pod \"calico-apiserver-5ff4995848-5jlzt\" (UID: \"51a16b4c-b541-40b1-ba52-b426bfe5e240\") " pod="calico-apiserver/calico-apiserver-5ff4995848-5jlzt" Jan 16 09:06:38.285334 kubelet[2571]: I0116 09:06:38.285161 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv69p\" (UniqueName: \"kubernetes.io/projected/9f7d449c-357b-4091-8b94-7aeb96a263ac-kube-api-access-gv69p\") pod \"coredns-7db6d8ff4d-ff2ps\" (UID: \"9f7d449c-357b-4091-8b94-7aeb96a263ac\") " pod="kube-system/coredns-7db6d8ff4d-ff2ps" Jan 16 09:06:38.285334 kubelet[2571]: I0116 09:06:38.285199 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51a16b4c-b541-40b1-ba52-b426bfe5e240-calico-apiserver-certs\") pod \"calico-apiserver-5ff4995848-5jlzt\" (UID: \"51a16b4c-b541-40b1-ba52-b426bfe5e240\") " pod="calico-apiserver/calico-apiserver-5ff4995848-5jlzt" Jan 16 09:06:38.285334 kubelet[2571]: I0116 09:06:38.285226 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9030e0e7-f33c-4169-9df7-1b9ed86d0a85-tigera-ca-bundle\") pod \"calico-kube-controllers-7698c84dd8-drr4g\" (UID: \"9030e0e7-f33c-4169-9df7-1b9ed86d0a85\") " pod="calico-system/calico-kube-controllers-7698c84dd8-drr4g" Jan 16 09:06:38.285334 kubelet[2571]: I0116 09:06:38.285254 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16bb8998-202c-4c00-8496-dc8eaaa9a516-calico-apiserver-certs\") pod \"calico-apiserver-5ff4995848-jjr88\" (UID: \"16bb8998-202c-4c00-8496-dc8eaaa9a516\") " pod="calico-apiserver/calico-apiserver-5ff4995848-jjr88" Jan 16 09:06:38.285334 kubelet[2571]: I0116 09:06:38.285288 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f7d449c-357b-4091-8b94-7aeb96a263ac-config-volume\") pod \"coredns-7db6d8ff4d-ff2ps\" (UID: \"9f7d449c-357b-4091-8b94-7aeb96a263ac\") " pod="kube-system/coredns-7db6d8ff4d-ff2ps" Jan 16 09:06:38.285531 kubelet[2571]: I0116 09:06:38.285323 2571 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbjnn\" (UniqueName: \"kubernetes.io/projected/9030e0e7-f33c-4169-9df7-1b9ed86d0a85-kube-api-access-sbjnn\") pod \"calico-kube-controllers-7698c84dd8-drr4g\" (UID: \"9030e0e7-f33c-4169-9df7-1b9ed86d0a85\") " pod="calico-system/calico-kube-controllers-7698c84dd8-drr4g" Jan 16 09:06:38.471969 systemd[1]: Created slice kubepods-besteffort-podb7bd711d_8793_408e_a86f_5638b4667c72.slice - libcontainer container kubepods-besteffort-podb7bd711d_8793_408e_a86f_5638b4667c72.slice. Jan 16 09:06:38.474282 kubelet[2571]: E0116 09:06:38.474242 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:38.476892 containerd[1482]: time="2025-01-16T09:06:38.476267637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ff2ps,Uid:9f7d449c-357b-4091-8b94-7aeb96a263ac,Namespace:kube-system,Attempt:0,}" Jan 16 09:06:38.478188 containerd[1482]: time="2025-01-16T09:06:38.477493480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7n9l,Uid:b7bd711d-8793-408e-a86f-5638b4667c72,Namespace:calico-system,Attempt:0,}" Jan 16 09:06:38.493200 containerd[1482]: time="2025-01-16T09:06:38.492870042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4995848-5jlzt,Uid:51a16b4c-b541-40b1-ba52-b426bfe5e240,Namespace:calico-apiserver,Attempt:0,}" Jan 16 09:06:38.509513 containerd[1482]: time="2025-01-16T09:06:38.509421747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7698c84dd8-drr4g,Uid:9030e0e7-f33c-4169-9df7-1b9ed86d0a85,Namespace:calico-system,Attempt:0,}" Jan 16 09:06:38.519466 containerd[1482]: time="2025-01-16T09:06:38.519154181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4995848-jjr88,Uid:16bb8998-202c-4c00-8496-dc8eaaa9a516,Namespace:calico-apiserver,Attempt:0,}" Jan 16 09:06:38.756678 kubelet[2571]: E0116 09:06:38.756391 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:38.761933 containerd[1482]: time="2025-01-16T09:06:38.761580607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cz7k2,Uid:3be6b83d-b704-4612-9bea-5273dc682d78,Namespace:kube-system,Attempt:0,}" Jan 16 09:06:38.767287 kubelet[2571]: E0116 09:06:38.766761 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:38.773874 containerd[1482]: time="2025-01-16T09:06:38.773370209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 16 09:06:39.318103 containerd[1482]: time="2025-01-16T09:06:39.316014249Z" level=error msg="Failed to destroy network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.321813 containerd[1482]: time="2025-01-16T09:06:39.320289792Z" level=error msg="encountered an error cleaning up failed sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.321813 containerd[1482]: time="2025-01-16T09:06:39.320425690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7n9l,Uid:b7bd711d-8793-408e-a86f-5638b4667c72,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.327223 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9-shm.mount: Deactivated successfully. Jan 16 09:06:39.348442 containerd[1482]: time="2025-01-16T09:06:39.348204969Z" level=error msg="Failed to destroy network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.349463 containerd[1482]: time="2025-01-16T09:06:39.349005256Z" level=error msg="encountered an error cleaning up failed sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.349463 containerd[1482]: time="2025-01-16T09:06:39.349113028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7698c84dd8-drr4g,Uid:9030e0e7-f33c-4169-9df7-1b9ed86d0a85,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.349463 containerd[1482]: time="2025-01-16T09:06:39.349264096Z" level=error msg="Failed to destroy network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.350142 containerd[1482]: time="2025-01-16T09:06:39.350100138Z" level=error msg="encountered an error cleaning up failed sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.350374 containerd[1482]: 
time="2025-01-16T09:06:39.350334918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4995848-jjr88,Uid:16bb8998-202c-4c00-8496-dc8eaaa9a516,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.353836 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a-shm.mount: Deactivated successfully. Jan 16 09:06:39.354006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f-shm.mount: Deactivated successfully. Jan 16 09:06:39.362900 kubelet[2571]: E0116 09:06:39.357815 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.362900 kubelet[2571]: E0116 09:06:39.357932 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff4995848-jjr88" Jan 16 09:06:39.362900 kubelet[2571]: E0116 09:06:39.357968 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff4995848-jjr88" Jan 16 09:06:39.363227 containerd[1482]: time="2025-01-16T09:06:39.360204614Z" level=error msg="Failed to destroy network for sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.363227 containerd[1482]: time="2025-01-16T09:06:39.360961146Z" level=error msg="encountered an error cleaning up failed sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.363227 containerd[1482]: time="2025-01-16T09:06:39.361052402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4995848-5jlzt,Uid:51a16b4c-b541-40b1-ba52-b426bfe5e240,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.363227 containerd[1482]: time="2025-01-16T09:06:39.361236074Z" level=error msg="Failed to destroy network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.363489 kubelet[2571]: E0116 09:06:39.358036 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ff4995848-jjr88_calico-apiserver(16bb8998-202c-4c00-8496-dc8eaaa9a516)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ff4995848-jjr88_calico-apiserver(16bb8998-202c-4c00-8496-dc8eaaa9a516)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff4995848-jjr88" podUID="16bb8998-202c-4c00-8496-dc8eaaa9a516" Jan 16 09:06:39.363489 kubelet[2571]: E0116 09:06:39.358391 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.363489 kubelet[2571]: E0116 09:06:39.358445 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r7n9l" Jan 16 09:06:39.363665 kubelet[2571]: E0116 09:06:39.358474 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r7n9l" Jan 16 09:06:39.363665 kubelet[2571]: E0116 09:06:39.358520 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r7n9l_calico-system(b7bd711d-8793-408e-a86f-5638b4667c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r7n9l_calico-system(b7bd711d-8793-408e-a86f-5638b4667c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r7n9l" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" Jan 16 09:06:39.363665 kubelet[2571]: E0116 09:06:39.358583 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.368672 kubelet[2571]: E0116 09:06:39.358609 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7698c84dd8-drr4g" Jan 16 09:06:39.368672 kubelet[2571]: E0116 09:06:39.358630 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7698c84dd8-drr4g" Jan 16 09:06:39.368672 kubelet[2571]: E0116 09:06:39.358673 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7698c84dd8-drr4g_calico-system(9030e0e7-f33c-4169-9df7-1b9ed86d0a85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7698c84dd8-drr4g_calico-system(9030e0e7-f33c-4169-9df7-1b9ed86d0a85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7698c84dd8-drr4g" podUID="9030e0e7-f33c-4169-9df7-1b9ed86d0a85" Jan 16 09:06:39.368480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff-shm.mount: Deactivated successfully. 
Jan 16 09:06:39.369194 containerd[1482]: time="2025-01-16T09:06:39.365917609Z" level=error msg="encountered an error cleaning up failed sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.369194 containerd[1482]: time="2025-01-16T09:06:39.366074162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ff2ps,Uid:9f7d449c-357b-4091-8b94-7aeb96a263ac,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.369194 containerd[1482]: time="2025-01-16T09:06:39.366249784Z" level=error msg="Failed to destroy network for sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.369835 kubelet[2571]: E0116 09:06:39.369451 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.369835 kubelet[2571]: E0116 09:06:39.369513 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ff2ps" Jan 16 09:06:39.369835 kubelet[2571]: E0116 09:06:39.369540 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ff2ps" Jan 16 09:06:39.370008 kubelet[2571]: E0116 09:06:39.369812 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ff2ps_kube-system(9f7d449c-357b-4091-8b94-7aeb96a263ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ff2ps_kube-system(9f7d449c-357b-4091-8b94-7aeb96a263ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-ff2ps" podUID="9f7d449c-357b-4091-8b94-7aeb96a263ac" Jan 16 09:06:39.370008 kubelet[2571]: E0116 09:06:39.369891 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.370008 kubelet[2571]: E0116 09:06:39.369919 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff4995848-5jlzt" Jan 16 09:06:39.370147 kubelet[2571]: E0116 09:06:39.369940 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff4995848-5jlzt" Jan 16 09:06:39.370147 kubelet[2571]: E0116 09:06:39.369967 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ff4995848-5jlzt_calico-apiserver(51a16b4c-b541-40b1-ba52-b426bfe5e240)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ff4995848-5jlzt_calico-apiserver(51a16b4c-b541-40b1-ba52-b426bfe5e240)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff4995848-5jlzt" podUID="51a16b4c-b541-40b1-ba52-b426bfe5e240" Jan 16 09:06:39.374666 kubelet[2571]: E0116 09:06:39.373748 2571 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.374666 kubelet[2571]: E0116 09:06:39.373848 2571 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cz7k2" Jan 16 09:06:39.374666 kubelet[2571]: E0116 09:06:39.373878 2571 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cz7k2" Jan 16 09:06:39.374915 containerd[1482]: time="2025-01-16T09:06:39.373183794Z" level=error msg="encountered an error cleaning up failed sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.374915 containerd[1482]: time="2025-01-16T09:06:39.373294722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cz7k2,Uid:3be6b83d-b704-4612-9bea-5273dc682d78,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:39.370908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130-shm.mount: Deactivated successfully. Jan 16 09:06:39.375120 kubelet[2571]: E0116 09:06:39.373936 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cz7k2_kube-system(3be6b83d-b704-4612-9bea-5273dc682d78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cz7k2_kube-system(3be6b83d-b704-4612-9bea-5273dc682d78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cz7k2" podUID="3be6b83d-b704-4612-9bea-5273dc682d78" Jan 16 09:06:39.378494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37-shm.mount: Deactivated successfully. 
Jan 16 09:06:39.773606 kubelet[2571]: I0116 09:06:39.770429 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:06:39.780331 kubelet[2571]: I0116 09:06:39.776904 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:06:39.780506 containerd[1482]: time="2025-01-16T09:06:39.779285395Z" level=info msg="StopPodSandbox for \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\"" Jan 16 09:06:39.786187 containerd[1482]: time="2025-01-16T09:06:39.785565861Z" level=info msg="StopPodSandbox for \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\"" Jan 16 09:06:39.787533 containerd[1482]: time="2025-01-16T09:06:39.787328880Z" level=info msg="Ensure that sandbox ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff in task-service has been cleanup successfully" Jan 16 09:06:39.788109 containerd[1482]: time="2025-01-16T09:06:39.787359352Z" level=info msg="Ensure that sandbox 3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a in task-service has been cleanup successfully" Jan 16 09:06:39.794130 kubelet[2571]: I0116 09:06:39.794086 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:06:39.797273 containerd[1482]: time="2025-01-16T09:06:39.796832037Z" level=info msg="StopPodSandbox for \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\"" Jan 16 09:06:39.797874 containerd[1482]: time="2025-01-16T09:06:39.797763430Z" level=info msg="Ensure that sandbox 575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9 in task-service has been cleanup successfully" Jan 16 09:06:39.804821 kubelet[2571]: I0116 09:06:39.804568 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:06:39.806430 containerd[1482]: time="2025-01-16T09:06:39.806175259Z" level=info msg="StopPodSandbox for \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\"" Jan 16 09:06:39.816828 containerd[1482]: time="2025-01-16T09:06:39.815157836Z" level=info msg="Ensure that sandbox 062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130 in task-service has been cleanup successfully" Jan 16 09:06:39.842927 kubelet[2571]: I0116 09:06:39.842875 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:06:39.846852 containerd[1482]: time="2025-01-16T09:06:39.846799062Z" level=info msg="StopPodSandbox for \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\"" Jan 16 09:06:39.856292 containerd[1482]: time="2025-01-16T09:06:39.856198095Z" level=info msg="Ensure that sandbox cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f in task-service has been cleanup successfully" Jan 16 09:06:39.875879 kubelet[2571]: I0116 09:06:39.875833 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:06:39.891095 containerd[1482]: time="2025-01-16T09:06:39.890591515Z" level=info msg="StopPodSandbox for \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\"" Jan 16 09:06:39.893167 
containerd[1482]: time="2025-01-16T09:06:39.893095816Z" level=info msg="Ensure that sandbox 3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37 in task-service has been cleanup successfully" Jan 16 09:06:40.014793 containerd[1482]: time="2025-01-16T09:06:40.014686287Z" level=error msg="StopPodSandbox for \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\" failed" error="failed to destroy network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.015977 kubelet[2571]: E0116 09:06:40.015398 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:06:40.015977 kubelet[2571]: E0116 09:06:40.015488 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9"} Jan 16 09:06:40.015977 kubelet[2571]: E0116 09:06:40.015606 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7bd711d-8793-408e-a86f-5638b4667c72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:06:40.015977 kubelet[2571]: E0116 09:06:40.015648 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7bd711d-8793-408e-a86f-5638b4667c72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r7n9l" podUID="b7bd711d-8793-408e-a86f-5638b4667c72" Jan 16 09:06:40.033515 containerd[1482]: time="2025-01-16T09:06:40.033253171Z" level=error msg="StopPodSandbox for \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\" failed" error="failed to destroy network for sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.035639 kubelet[2571]: E0116 09:06:40.034123 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:06:40.035639 kubelet[2571]: E0116 09:06:40.034204 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff"} Jan 16 09:06:40.035639 kubelet[2571]: E0116 09:06:40.034267 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"51a16b4c-b541-40b1-ba52-b426bfe5e240\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:06:40.035639 kubelet[2571]: E0116 09:06:40.034304 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"51a16b4c-b541-40b1-ba52-b426bfe5e240\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff4995848-5jlzt" podUID="51a16b4c-b541-40b1-ba52-b426bfe5e240" Jan 16 09:06:40.054236 containerd[1482]: time="2025-01-16T09:06:40.054158849Z" level=error msg="StopPodSandbox for \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\" failed" error="failed to destroy network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.055004 kubelet[2571]: E0116 09:06:40.054938 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:06:40.055190 kubelet[2571]: E0116 09:06:40.055020 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a"} Jan 16 09:06:40.055190 kubelet[2571]: E0116 09:06:40.055071 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16bb8998-202c-4c00-8496-dc8eaaa9a516\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:06:40.055190 kubelet[2571]: E0116 09:06:40.055105 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"16bb8998-202c-4c00-8496-dc8eaaa9a516\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff4995848-jjr88" podUID="16bb8998-202c-4c00-8496-dc8eaaa9a516" Jan 16 09:06:40.065425 containerd[1482]: time="2025-01-16T09:06:40.065212775Z" level=error msg="StopPodSandbox for \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\" failed" error="failed to destroy network for sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.066220 kubelet[2571]: E0116 09:06:40.065964 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:06:40.066220 kubelet[2571]: E0116 09:06:40.066041 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37"} Jan 16 09:06:40.066220 kubelet[2571]: E0116 09:06:40.066093 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3be6b83d-b704-4612-9bea-5273dc682d78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:06:40.066220 kubelet[2571]: E0116 09:06:40.066137 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3be6b83d-b704-4612-9bea-5273dc682d78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cz7k2" podUID="3be6b83d-b704-4612-9bea-5273dc682d78" Jan 16 09:06:40.075038 containerd[1482]: time="2025-01-16T09:06:40.074955143Z" level=error msg="StopPodSandbox for \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\" failed" error="failed to destroy network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.076250 kubelet[2571]: E0116 09:06:40.076167 2571 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:06:40.076991 kubelet[2571]: E0116 09:06:40.076255 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f"} Jan 16 09:06:40.076991 kubelet[2571]: E0116 09:06:40.076307 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9030e0e7-f33c-4169-9df7-1b9ed86d0a85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:06:40.076991 kubelet[2571]: E0116 09:06:40.076344 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9030e0e7-f33c-4169-9df7-1b9ed86d0a85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7698c84dd8-drr4g" podUID="9030e0e7-f33c-4169-9df7-1b9ed86d0a85" Jan 16 09:06:40.077557 containerd[1482]: time="2025-01-16T09:06:40.077461110Z" level=error msg="StopPodSandbox for \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\" failed" error="failed to destroy network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.078507 kubelet[2571]: E0116 09:06:40.077866 2571 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:06:40.078507 kubelet[2571]: E0116 09:06:40.077949 2571 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130"} Jan 16 09:06:40.078507 kubelet[2571]: E0116 09:06:40.078008 2571 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f7d449c-357b-4091-8b94-7aeb96a263ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:06:40.078507 kubelet[2571]: E0116 09:06:40.078044 2571 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f7d449c-357b-4091-8b94-7aeb96a263ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ff2ps" podUID="9f7d449c-357b-4091-8b94-7aeb96a263ac" Jan 16 09:06:49.200552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount316158345.mount: Deactivated successfully. Jan 16 09:06:49.330616 containerd[1482]: time="2025-01-16T09:06:49.330387241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 16 09:06:49.336131 containerd[1482]: time="2025-01-16T09:06:49.332608820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:49.357127 containerd[1482]: time="2025-01-16T09:06:49.356619657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.564853124s" Jan 16 09:06:49.358508 containerd[1482]: time="2025-01-16T09:06:49.357844747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 16 09:06:49.374699 containerd[1482]: time="2025-01-16T09:06:49.374622192Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:49.375643 containerd[1482]: time="2025-01-16T09:06:49.375594820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:49.631999 containerd[1482]: time="2025-01-16T09:06:49.631750837Z" level=info msg="CreateContainer within sandbox \"06dddb64708c113d576c2552d1e5487c5ca4094ef8af0a058fb9416a3ee7dc5c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 09:06:49.814696 containerd[1482]: time="2025-01-16T09:06:49.814364492Z" level=info msg="CreateContainer within sandbox \"06dddb64708c113d576c2552d1e5487c5ca4094ef8af0a058fb9416a3ee7dc5c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3fdb2f3b6ecf814e666ea22ad059ee29578a3cd4650599326684afb4cfc7eafa\"" Jan 16 09:06:49.824112 containerd[1482]: time="2025-01-16T09:06:49.824003109Z" level=info msg="StartContainer for \"3fdb2f3b6ecf814e666ea22ad059ee29578a3cd4650599326684afb4cfc7eafa\"" Jan 16 09:06:50.197481 systemd[1]: Started cri-containerd-3fdb2f3b6ecf814e666ea22ad059ee29578a3cd4650599326684afb4cfc7eafa.scope - libcontainer container 
3fdb2f3b6ecf814e666ea22ad059ee29578a3cd4650599326684afb4cfc7eafa. Jan 16 09:06:50.337242 containerd[1482]: time="2025-01-16T09:06:50.336655659Z" level=info msg="StartContainer for \"3fdb2f3b6ecf814e666ea22ad059ee29578a3cd4650599326684afb4cfc7eafa\" returns successfully" Jan 16 09:06:50.526899 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 09:06:50.529911 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 16 09:06:51.078065 kubelet[2571]: E0116 09:06:51.077338 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:51.466233 containerd[1482]: time="2025-01-16T09:06:51.465505787Z" level=info msg="StopPodSandbox for \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\"" Jan 16 09:06:51.641862 kubelet[2571]: I0116 09:06:51.615217 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cvrz2" podStartSLOduration=3.329557168 podStartE2EDuration="27.604484704s" podCreationTimestamp="2025-01-16 09:06:24 +0000 UTC" firstStartedPulling="2025-01-16 09:06:25.121233494 +0000 UTC m=+22.057637330" lastFinishedPulling="2025-01-16 09:06:49.396161048 +0000 UTC m=+46.332564866" observedRunningTime="2025-01-16 09:06:51.166555749 +0000 UTC m=+48.102959586" watchObservedRunningTime="2025-01-16 09:06:51.604484704 +0000 UTC m=+48.540888557" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.608 [INFO][3772] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.611 [INFO][3772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" iface="eth0" netns="/var/run/netns/cni-7dbc923d-0424-6472-5bf7-307f82a89a91" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.611 [INFO][3772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" iface="eth0" netns="/var/run/netns/cni-7dbc923d-0424-6472-5bf7-307f82a89a91" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.618 [INFO][3772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" iface="eth0" netns="/var/run/netns/cni-7dbc923d-0424-6472-5bf7-307f82a89a91" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.618 [INFO][3772] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.618 [INFO][3772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.908 [INFO][3779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" HandleID="k8s-pod-network.3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.918 [INFO][3779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.918 [INFO][3779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.933 [WARNING][3779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" HandleID="k8s-pod-network.3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.933 [INFO][3779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" HandleID="k8s-pod-network.3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.938 [INFO][3779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:51.945004 containerd[1482]: 2025-01-16 09:06:51.941 [INFO][3772] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:06:51.945968 containerd[1482]: time="2025-01-16T09:06:51.945237090Z" level=info msg="TearDown network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\" successfully" Jan 16 09:06:51.945968 containerd[1482]: time="2025-01-16T09:06:51.945291412Z" level=info msg="StopPodSandbox for \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\" returns successfully" Jan 16 09:06:51.951484 containerd[1482]: time="2025-01-16T09:06:51.950158508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4995848-jjr88,Uid:16bb8998-202c-4c00-8496-dc8eaaa9a516,Namespace:calico-apiserver,Attempt:1,}" Jan 16 09:06:51.954478 systemd[1]: run-netns-cni\x2d7dbc923d\x2d0424\x2d6472\x2d5bf7\x2d307f82a89a91.mount: Deactivated successfully. 
Jan 16 09:06:52.076553 kubelet[2571]: E0116 09:06:52.076497 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:52.404127 systemd-networkd[1374]: cali5467b727370: Link UP Jan 16 09:06:52.404627 systemd-networkd[1374]: cali5467b727370: Gained carrier Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.145 [INFO][3785] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.168 [INFO][3785] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0 calico-apiserver-5ff4995848- calico-apiserver 16bb8998-202c-4c00-8496-dc8eaaa9a516 774 0 2025-01-16 09:06:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ff4995848 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-d8418dcdb9 calico-apiserver-5ff4995848-jjr88 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5467b727370 [] []}} ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-jjr88" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.168 [INFO][3785] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-jjr88" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.248 [INFO][3817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" HandleID="k8s-pod-network.3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.283 [INFO][3817] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" HandleID="k8s-pod-network.3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a69f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-d8418dcdb9", "pod":"calico-apiserver-5ff4995848-jjr88", "timestamp":"2025-01-16 09:06:52.248692196 +0000 UTC"}, Hostname:"ci-4081.3.0-a-d8418dcdb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.283 [INFO][3817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.283 [INFO][3817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.283 [INFO][3817] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-d8418dcdb9' Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.290 [INFO][3817] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.310 [INFO][3817] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.322 [INFO][3817] ipam/ipam.go 489: Trying affinity for 192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.328 [INFO][3817] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.340 [INFO][3817] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.341 [INFO][3817] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.345 [INFO][3817] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47 Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.356 [INFO][3817] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.369 [INFO][3817] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.1/26] block=192.168.8.0/26 handle="k8s-pod-network.3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.369 [INFO][3817] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.1/26] handle="k8s-pod-network.3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.369 [INFO][3817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:06:52.434167 containerd[1482]: 2025-01-16 09:06:52.369 [INFO][3817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.1/26] IPv6=[] ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" HandleID="k8s-pod-network.3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:52.435953 containerd[1482]: 2025-01-16 09:06:52.373 [INFO][3785] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-jjr88" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0", GenerateName:"calico-apiserver-5ff4995848-", Namespace:"calico-apiserver", SelfLink:"", UID:"16bb8998-202c-4c00-8496-dc8eaaa9a516", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4995848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"", Pod:"calico-apiserver-5ff4995848-jjr88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5467b727370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:52.435953 containerd[1482]: 2025-01-16 09:06:52.373 [INFO][3785] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.1/32] ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-jjr88" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:52.435953 containerd[1482]: 2025-01-16 09:06:52.374 [INFO][3785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5467b727370 ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-jjr88" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:52.435953 containerd[1482]: 2025-01-16 09:06:52.388 [INFO][3785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-jjr88" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:52.435953 containerd[1482]: 2025-01-16 09:06:52.403 [INFO][3785] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-jjr88" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0", GenerateName:"calico-apiserver-5ff4995848-", Namespace:"calico-apiserver", SelfLink:"", UID:"16bb8998-202c-4c00-8496-dc8eaaa9a516", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4995848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47", Pod:"calico-apiserver-5ff4995848-jjr88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5467b727370", MAC:"76:8c:cf:e6:f5:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:52.435953 containerd[1482]: 2025-01-16 09:06:52.429 [INFO][3785] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-jjr88" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:06:52.470890 containerd[1482]: time="2025-01-16T09:06:52.466865019Z" level=info msg="StopPodSandbox for \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\"" Jan 16 09:06:52.488648 containerd[1482]: time="2025-01-16T09:06:52.480174098Z" level=info msg="StopPodSandbox for \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\"" Jan 16 09:06:52.690520 containerd[1482]: time="2025-01-16T09:06:52.688692564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:52.690520 containerd[1482]: time="2025-01-16T09:06:52.688827049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:52.690520 containerd[1482]: time="2025-01-16T09:06:52.688854048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:52.690520 containerd[1482]: time="2025-01-16T09:06:52.689108104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:52.824513 systemd[1]: Started cri-containerd-3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47.scope - libcontainer container 3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47. Jan 16 09:06:52.865017 kubelet[2571]: I0116 09:06:52.864938 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:06:52.989940 kubelet[2571]: E0116 09:06:52.988410 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:52.828 [INFO][3898] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:52.828 [INFO][3898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" iface="eth0" netns="/var/run/netns/cni-e6711ddc-e58c-ffbc-05b1-ab3f4b573ad3" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:52.828 [INFO][3898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" iface="eth0" netns="/var/run/netns/cni-e6711ddc-e58c-ffbc-05b1-ab3f4b573ad3" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:52.829 [INFO][3898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" iface="eth0" netns="/var/run/netns/cni-e6711ddc-e58c-ffbc-05b1-ab3f4b573ad3" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:52.830 [INFO][3898] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:52.830 [INFO][3898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:53.017 [INFO][3970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" HandleID="k8s-pod-network.3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:53.018 [INFO][3970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:53.018 [INFO][3970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:53.045 [WARNING][3970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" HandleID="k8s-pod-network.3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:53.045 [INFO][3970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" HandleID="k8s-pod-network.3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:53.050 [INFO][3970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:53.073520 containerd[1482]: 2025-01-16 09:06:53.066 [INFO][3898] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:06:53.073520 containerd[1482]: time="2025-01-16T09:06:53.073365295Z" level=info msg="TearDown network for sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\" successfully" Jan 16 09:06:53.073520 containerd[1482]: time="2025-01-16T09:06:53.073402709Z" level=info msg="StopPodSandbox for \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\" returns successfully" Jan 16 09:06:53.080662 systemd[1]: run-netns-cni\x2de6711ddc\x2de58c\x2dffbc\x2d05b1\x2dab3f4b573ad3.mount: Deactivated successfully. Jan 16 09:06:53.087302 containerd[1482]: time="2025-01-16T09:06:53.082520313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cz7k2,Uid:3be6b83d-b704-4612-9bea-5273dc682d78,Namespace:kube-system,Attempt:1,}" Jan 16 09:06:53.087406 kubelet[2571]: E0116 09:06:53.080752 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:53.102235 kubelet[2571]: E0116 09:06:53.100411 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:52.842 [INFO][3899] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:52.844 [INFO][3899] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" iface="eth0" netns="/var/run/netns/cni-f5c603b9-00a7-dfaa-7645-edf46335e1ec" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:52.844 [INFO][3899] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" iface="eth0" netns="/var/run/netns/cni-f5c603b9-00a7-dfaa-7645-edf46335e1ec" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:52.844 [INFO][3899] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" iface="eth0" netns="/var/run/netns/cni-f5c603b9-00a7-dfaa-7645-edf46335e1ec" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:52.845 [INFO][3899] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:52.845 [INFO][3899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:53.056 [INFO][3972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" HandleID="k8s-pod-network.062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:53.057 [INFO][3972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:53.057 [INFO][3972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:53.076 [WARNING][3972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" HandleID="k8s-pod-network.062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:53.076 [INFO][3972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" HandleID="k8s-pod-network.062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:53.090 [INFO][3972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:53.140630 containerd[1482]: 2025-01-16 09:06:53.119 [INFO][3899] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:06:53.151004 kubelet[2571]: E0116 09:06:53.147448 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:53.155981 containerd[1482]: time="2025-01-16T09:06:53.152650195Z" level=info msg="TearDown network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\" successfully" Jan 16 09:06:53.155981 containerd[1482]: time="2025-01-16T09:06:53.152703665Z" level=info msg="StopPodSandbox for \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\" returns successfully" Jan 16 09:06:53.157351 kubelet[2571]: E0116 09:06:53.155341 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:53.157387 systemd[1]: run-netns-cni\x2df5c603b9\x2d00a7\x2ddfaa\x2d7645\x2dedf46335e1ec.mount: Deactivated successfully. 
Jan 16 09:06:53.161028 containerd[1482]: time="2025-01-16T09:06:53.160513453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ff2ps,Uid:9f7d449c-357b-4091-8b94-7aeb96a263ac,Namespace:kube-system,Attempt:1,}" Jan 16 09:06:53.410909 systemd-networkd[1374]: cali5467b727370: Gained IPv6LL Jan 16 09:06:53.466576 containerd[1482]: time="2025-01-16T09:06:53.465590867Z" level=info msg="StopPodSandbox for \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\"" Jan 16 09:06:53.469861 containerd[1482]: time="2025-01-16T09:06:53.468855533Z" level=info msg="StopPodSandbox for \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\"" Jan 16 09:06:53.475551 containerd[1482]: time="2025-01-16T09:06:53.475434425Z" level=info msg="StopPodSandbox for \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\"" Jan 16 09:06:53.814812 containerd[1482]: time="2025-01-16T09:06:53.814295939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4995848-jjr88,Uid:16bb8998-202c-4c00-8496-dc8eaaa9a516,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47\"" Jan 16 09:06:53.894485 containerd[1482]: time="2025-01-16T09:06:53.892263025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 16 09:06:54.063305 systemd-networkd[1374]: cali6b2f4b19036: Link UP Jan 16 09:06:54.072197 systemd-networkd[1374]: cali6b2f4b19036: Gained carrier Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.316 [INFO][4022] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.371 [INFO][4022] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0 coredns-7db6d8ff4d- kube-system 9f7d449c-357b-4091-8b94-7aeb96a263ac 784 0 2025-01-16 09:06:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-d8418dcdb9 coredns-7db6d8ff4d-ff2ps eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b2f4b19036 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ff2ps" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.374 [INFO][4022] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ff2ps" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.702 [INFO][4052] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" HandleID="k8s-pod-network.a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.739 [INFO][4052] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" 
HandleID="k8s-pod-network.a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031f9c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-d8418dcdb9", "pod":"coredns-7db6d8ff4d-ff2ps", "timestamp":"2025-01-16 09:06:53.702461164 +0000 UTC"}, Hostname:"ci-4081.3.0-a-d8418dcdb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.739 [INFO][4052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.740 [INFO][4052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.740 [INFO][4052] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-d8418dcdb9' Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.748 [INFO][4052] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.797 [INFO][4052] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.816 [INFO][4052] ipam/ipam.go 489: Trying affinity for 192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.823 [INFO][4052] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.846 [INFO][4052] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.846 [INFO][4052] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.852 [INFO][4052] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.916 [INFO][4052] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.951 [INFO][4052] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.2/26] block=192.168.8.0/26 handle="k8s-pod-network.a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.951 [INFO][4052] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.2/26] handle="k8s-pod-network.a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.951 [INFO][4052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:06:54.214047 containerd[1482]: 2025-01-16 09:06:53.951 [INFO][4052] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.2/26] IPv6=[] ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" HandleID="k8s-pod-network.a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:54.215610 containerd[1482]: 2025-01-16 09:06:54.003 [INFO][4022] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ff2ps" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9f7d449c-357b-4091-8b94-7aeb96a263ac", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"", Pod:"coredns-7db6d8ff4d-ff2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b2f4b19036", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:54.215610 containerd[1482]: 2025-01-16 09:06:54.012 [INFO][4022] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.2/32] ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ff2ps" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:54.215610 containerd[1482]: 2025-01-16 09:06:54.012 [INFO][4022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b2f4b19036 ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ff2ps" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:54.215610 containerd[1482]: 2025-01-16 09:06:54.081 [INFO][4022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ff2ps" 
WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:54.215610 containerd[1482]: 2025-01-16 09:06:54.088 [INFO][4022] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ff2ps" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9f7d449c-357b-4091-8b94-7aeb96a263ac", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f", Pod:"coredns-7db6d8ff4d-ff2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b2f4b19036", MAC:"4a:35:7d:7b:2d:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:54.215610 containerd[1482]: 2025-01-16 09:06:54.192 [INFO][4022] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ff2ps" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:06:54.343857 systemd-networkd[1374]: calie12bbc23086: Link UP Jan 16 09:06:54.354271 systemd-networkd[1374]: calie12bbc23086: Gained carrier Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.145 [INFO][4101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.145 [INFO][4101] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" iface="eth0" netns="/var/run/netns/cni-9152f6fa-e81f-26f1-4bd6-ad58437e874d" Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.148 [INFO][4101] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" iface="eth0" netns="/var/run/netns/cni-9152f6fa-e81f-26f1-4bd6-ad58437e874d" Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.148 [INFO][4101] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" iface="eth0" netns="/var/run/netns/cni-9152f6fa-e81f-26f1-4bd6-ad58437e874d" Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.148 [INFO][4101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.148 [INFO][4101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.295 [INFO][4146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" HandleID="k8s-pod-network.cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.296 [INFO][4146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.300 [INFO][4146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.321 [WARNING][4146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" HandleID="k8s-pod-network.cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.321 [INFO][4146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" HandleID="k8s-pod-network.cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.327 [INFO][4146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:54.385554 containerd[1482]: 2025-01-16 09:06:54.349 [INFO][4101] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:06:54.391969 containerd[1482]: time="2025-01-16T09:06:54.387923185Z" level=info msg="TearDown network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\" successfully" Jan 16 09:06:54.391969 containerd[1482]: time="2025-01-16T09:06:54.387970135Z" level=info msg="StopPodSandbox for \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\" returns successfully" Jan 16 09:06:54.394671 containerd[1482]: time="2025-01-16T09:06:54.394624968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7698c84dd8-drr4g,Uid:9030e0e7-f33c-4169-9df7-1b9ed86d0a85,Namespace:calico-system,Attempt:1,}" Jan 16 09:06:54.397308 systemd[1]: run-netns-cni\x2d9152f6fa\x2de81f\x2d26f1\x2d4bd6\x2dad58437e874d.mount: Deactivated successfully. 
Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:53.365 [INFO][4020] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:53.408 [INFO][4020] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0 coredns-7db6d8ff4d- kube-system 3be6b83d-b704-4612-9bea-5273dc682d78 783 0 2025-01-16 09:06:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-d8418dcdb9 coredns-7db6d8ff4d-cz7k2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie12bbc23086 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cz7k2" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:53.408 [INFO][4020] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cz7k2" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:53.930 [INFO][4056] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" HandleID="k8s-pod-network.119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.003 [INFO][4056] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" HandleID="k8s-pod-network.119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011aa90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-d8418dcdb9", "pod":"coredns-7db6d8ff4d-cz7k2", "timestamp":"2025-01-16 09:06:53.93039754 +0000 UTC"}, Hostname:"ci-4081.3.0-a-d8418dcdb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.005 [INFO][4056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.010 [INFO][4056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.012 [INFO][4056] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-d8418dcdb9' Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.041 [INFO][4056] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.164 [INFO][4056] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.202 [INFO][4056] ipam/ipam.go 489: Trying affinity for 192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.212 [INFO][4056] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.239 [INFO][4056] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.239 [INFO][4056] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.250 [INFO][4056] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893 Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.269 [INFO][4056] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.300 [INFO][4056] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.3/26] block=192.168.8.0/26 handle="k8s-pod-network.119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.300 [INFO][4056] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.3/26] handle="k8s-pod-network.119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.300 [INFO][4056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:06:54.427383 containerd[1482]: 2025-01-16 09:06:54.300 [INFO][4056] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.3/26] IPv6=[] ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" HandleID="k8s-pod-network.119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:54.433118 containerd[1482]: 2025-01-16 09:06:54.323 [INFO][4020] cni-plugin/k8s.go 386: Populated endpoint ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cz7k2" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3be6b83d-b704-4612-9bea-5273dc682d78", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"", Pod:"coredns-7db6d8ff4d-cz7k2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie12bbc23086", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:54.433118 containerd[1482]: 2025-01-16 09:06:54.326 [INFO][4020] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.3/32] ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cz7k2" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:54.433118 containerd[1482]: 2025-01-16 09:06:54.326 [INFO][4020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie12bbc23086 ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cz7k2" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:54.433118 containerd[1482]: 2025-01-16 09:06:54.382 [INFO][4020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cz7k2" 
WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:54.433118 containerd[1482]: 2025-01-16 09:06:54.385 [INFO][4020] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cz7k2" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3be6b83d-b704-4612-9bea-5273dc682d78", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893", Pod:"coredns-7db6d8ff4d-cz7k2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie12bbc23086", MAC:"86:bf:85:c5:3e:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:54.433118 containerd[1482]: 2025-01-16 09:06:54.418 [INFO][4020] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cz7k2" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.105 [INFO][4100] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.107 [INFO][4100] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" iface="eth0" netns="/var/run/netns/cni-bdf9048d-5732-04b8-4327-7f3b3d7b8c33" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.109 [INFO][4100] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" iface="eth0" netns="/var/run/netns/cni-bdf9048d-5732-04b8-4327-7f3b3d7b8c33" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.111 [INFO][4100] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" iface="eth0" netns="/var/run/netns/cni-bdf9048d-5732-04b8-4327-7f3b3d7b8c33" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.111 [INFO][4100] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.111 [INFO][4100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.349 [INFO][4142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" HandleID="k8s-pod-network.575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.359 [INFO][4142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.359 [INFO][4142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.398 [WARNING][4142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" HandleID="k8s-pod-network.575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.399 [INFO][4142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" HandleID="k8s-pod-network.575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.413 [INFO][4142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:54.460929 containerd[1482]: 2025-01-16 09:06:54.442 [INFO][4100] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:06:54.460929 containerd[1482]: time="2025-01-16T09:06:54.459926551Z" level=info msg="TearDown network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\" successfully" Jan 16 09:06:54.460929 containerd[1482]: time="2025-01-16T09:06:54.459991988Z" level=info msg="StopPodSandbox for \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\" returns successfully" Jan 16 09:06:54.467362 systemd[1]: run-netns-cni\x2dbdf9048d\x2d5732\x2d04b8\x2d4327\x2d7f3b3d7b8c33.mount: Deactivated successfully. 
Jan 16 09:06:54.468700 containerd[1482]: time="2025-01-16T09:06:54.468185351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7n9l,Uid:b7bd711d-8793-408e-a86f-5638b4667c72,Namespace:calico-system,Attempt:1,}" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.213 [INFO][4105] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.218 [INFO][4105] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" iface="eth0" netns="/var/run/netns/cni-c2003c79-57d3-67e2-c628-9869bbe3a9f3" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.218 [INFO][4105] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" iface="eth0" netns="/var/run/netns/cni-c2003c79-57d3-67e2-c628-9869bbe3a9f3" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.218 [INFO][4105] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" iface="eth0" netns="/var/run/netns/cni-c2003c79-57d3-67e2-c628-9869bbe3a9f3" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.219 [INFO][4105] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.219 [INFO][4105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.493 [INFO][4157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" HandleID="k8s-pod-network.ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.494 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.498 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.532 [WARNING][4157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" HandleID="k8s-pod-network.ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.532 [INFO][4157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" HandleID="k8s-pod-network.ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.546 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:54.630155 containerd[1482]: 2025-01-16 09:06:54.555 [INFO][4105] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:06:54.635204 containerd[1482]: time="2025-01-16T09:06:54.635135769Z" level=info msg="TearDown network for sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\" successfully" Jan 16 09:06:54.635793 containerd[1482]: time="2025-01-16T09:06:54.635629610Z" level=info msg="StopPodSandbox for \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\" returns successfully" Jan 16 09:06:54.673891 containerd[1482]: time="2025-01-16T09:06:54.663683466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:54.673891 containerd[1482]: time="2025-01-16T09:06:54.663804034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:54.673891 containerd[1482]: time="2025-01-16T09:06:54.663819917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:54.673891 containerd[1482]: time="2025-01-16T09:06:54.663949082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:54.678173 containerd[1482]: time="2025-01-16T09:06:54.640544633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:54.678173 containerd[1482]: time="2025-01-16T09:06:54.640654521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:54.678173 containerd[1482]: time="2025-01-16T09:06:54.640676321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:54.678173 containerd[1482]: time="2025-01-16T09:06:54.649124924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:54.690320 containerd[1482]: time="2025-01-16T09:06:54.690256608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4995848-5jlzt,Uid:51a16b4c-b541-40b1-ba52-b426bfe5e240,Namespace:calico-apiserver,Attempt:1,}" Jan 16 09:06:54.770228 systemd[1]: run-netns-cni\x2dc2003c79\x2d57d3\x2d67e2\x2dc628\x2d9869bbe3a9f3.mount: Deactivated successfully. Jan 16 09:06:54.839357 systemd[1]: Started cri-containerd-a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f.scope - libcontainer container a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f. Jan 16 09:06:54.864328 systemd[1]: Started cri-containerd-119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893.scope - libcontainer container 119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893. 
Jan 16 09:06:55.232151 containerd[1482]: time="2025-01-16T09:06:55.231851947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ff2ps,Uid:9f7d449c-357b-4091-8b94-7aeb96a263ac,Namespace:kube-system,Attempt:1,} returns sandbox id \"a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f\"" Jan 16 09:06:55.236141 kubelet[2571]: E0116 09:06:55.235510 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:55.255559 containerd[1482]: time="2025-01-16T09:06:55.255302786Z" level=info msg="CreateContainer within sandbox \"a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 09:06:55.263268 containerd[1482]: time="2025-01-16T09:06:55.260485850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cz7k2,Uid:3be6b83d-b704-4612-9bea-5273dc682d78,Namespace:kube-system,Attempt:1,} returns sandbox id \"119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893\"" Jan 16 09:06:55.271199 kubelet[2571]: E0116 09:06:55.270494 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:55.305715 containerd[1482]: time="2025-01-16T09:06:55.303991507Z" level=info msg="CreateContainer within sandbox \"119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 09:06:55.358295 containerd[1482]: time="2025-01-16T09:06:55.358229875Z" level=info msg="CreateContainer within sandbox \"a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b89ccd22df8c44ae11a8c3181c38720aa46b6ecb38aebeef48dea39ff05deca\"" Jan 16 09:06:55.360372 containerd[1482]: time="2025-01-16T09:06:55.360209266Z" level=info msg="StartContainer for \"0b89ccd22df8c44ae11a8c3181c38720aa46b6ecb38aebeef48dea39ff05deca\"" Jan 16 09:06:55.364462 containerd[1482]: time="2025-01-16T09:06:55.363165079Z" level=info msg="CreateContainer within sandbox \"119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b65d91893c94728321d047be7a32ca7c96a6b96b9201101ec517a46fc59b076\"" Jan 16 09:06:55.365582 containerd[1482]: time="2025-01-16T09:06:55.365386797Z" level=info msg="StartContainer for \"6b65d91893c94728321d047be7a32ca7c96a6b96b9201101ec517a46fc59b076\"" Jan 16 09:06:55.460130 systemd[1]: Started cri-containerd-0b89ccd22df8c44ae11a8c3181c38720aa46b6ecb38aebeef48dea39ff05deca.scope - libcontainer container 0b89ccd22df8c44ae11a8c3181c38720aa46b6ecb38aebeef48dea39ff05deca. Jan 16 09:06:55.525648 systemd[1]: Started cri-containerd-6b65d91893c94728321d047be7a32ca7c96a6b96b9201101ec517a46fc59b076.scope - libcontainer container 6b65d91893c94728321d047be7a32ca7c96a6b96b9201101ec517a46fc59b076. 
Jan 16 09:06:55.537230 systemd-networkd[1374]: calib5d814bc144: Link UP Jan 16 09:06:55.540151 systemd-networkd[1374]: calib5d814bc144: Gained carrier Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:54.856 [INFO][4210] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:54.929 [INFO][4210] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0 calico-kube-controllers-7698c84dd8- calico-system 9030e0e7-f33c-4169-9df7-1b9ed86d0a85 805 0 2025-01-16 09:06:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7698c84dd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-d8418dcdb9 calico-kube-controllers-7698c84dd8-drr4g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib5d814bc144 [] []}} ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Namespace="calico-system" Pod="calico-kube-controllers-7698c84dd8-drr4g" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:54.930 [INFO][4210] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Namespace="calico-system" Pod="calico-kube-controllers-7698c84dd8-drr4g" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.218 [INFO][4301] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" HandleID="k8s-pod-network.ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.297 [INFO][4301] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" HandleID="k8s-pod-network.ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-d8418dcdb9", "pod":"calico-kube-controllers-7698c84dd8-drr4g", "timestamp":"2025-01-16 09:06:55.218094016 +0000 UTC"}, Hostname:"ci-4081.3.0-a-d8418dcdb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.298 [INFO][4301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.298 [INFO][4301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.298 [INFO][4301] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-d8418dcdb9' Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.323 [INFO][4301] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.361 [INFO][4301] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.388 [INFO][4301] ipam/ipam.go 489: Trying affinity for 192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.396 [INFO][4301] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.423 [INFO][4301] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.423 [INFO][4301] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.435 [INFO][4301] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.470 [INFO][4301] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.492 [INFO][4301] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.4/26] block=192.168.8.0/26 handle="k8s-pod-network.ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.492 [INFO][4301] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.4/26] handle="k8s-pod-network.ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.492 [INFO][4301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:06:55.632327 containerd[1482]: 2025-01-16 09:06:55.492 [INFO][4301] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.4/26] IPv6=[] ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" HandleID="k8s-pod-network.ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:55.636748 containerd[1482]: 2025-01-16 09:06:55.522 [INFO][4210] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Namespace="calico-system" Pod="calico-kube-controllers-7698c84dd8-drr4g" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0", GenerateName:"calico-kube-controllers-7698c84dd8-", Namespace:"calico-system", SelfLink:"", UID:"9030e0e7-f33c-4169-9df7-1b9ed86d0a85", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7698c84dd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"", Pod:"calico-kube-controllers-7698c84dd8-drr4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5d814bc144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:55.636748 containerd[1482]: 2025-01-16 09:06:55.522 [INFO][4210] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.4/32] ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Namespace="calico-system" Pod="calico-kube-controllers-7698c84dd8-drr4g" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:55.636748 containerd[1482]: 2025-01-16 09:06:55.522 [INFO][4210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5d814bc144 ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Namespace="calico-system" Pod="calico-kube-controllers-7698c84dd8-drr4g" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:55.636748 containerd[1482]: 2025-01-16 09:06:55.541 [INFO][4210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Namespace="calico-system" Pod="calico-kube-controllers-7698c84dd8-drr4g" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:55.636748 
containerd[1482]: 2025-01-16 09:06:55.545 [INFO][4210] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Namespace="calico-system" Pod="calico-kube-controllers-7698c84dd8-drr4g" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0", GenerateName:"calico-kube-controllers-7698c84dd8-", Namespace:"calico-system", SelfLink:"", UID:"9030e0e7-f33c-4169-9df7-1b9ed86d0a85", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7698c84dd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b", Pod:"calico-kube-controllers-7698c84dd8-drr4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5d814bc144", MAC:"c6:27:61:08:81:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:55.636748 containerd[1482]: 2025-01-16 09:06:55.615 [INFO][4210] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b" Namespace="calico-system" Pod="calico-kube-controllers-7698c84dd8-drr4g" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:06:55.677557 containerd[1482]: time="2025-01-16T09:06:55.677177325Z" level=info msg="StartContainer for \"0b89ccd22df8c44ae11a8c3181c38720aa46b6ecb38aebeef48dea39ff05deca\" returns successfully" Jan 16 09:06:55.692835 containerd[1482]: time="2025-01-16T09:06:55.692423479Z" level=info msg="StartContainer for \"6b65d91893c94728321d047be7a32ca7c96a6b96b9201101ec517a46fc59b076\" returns successfully" Jan 16 09:06:55.760517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273491770.mount: Deactivated successfully. Jan 16 09:06:55.770711 containerd[1482]: time="2025-01-16T09:06:55.770537721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:55.770711 containerd[1482]: time="2025-01-16T09:06:55.770630149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:55.770711 containerd[1482]: time="2025-01-16T09:06:55.770647120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:55.774144 containerd[1482]: time="2025-01-16T09:06:55.770789491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:55.778161 systemd-networkd[1374]: cali6b2f4b19036: Gained IPv6LL Jan 16 09:06:55.846600 systemd[1]: Started cri-containerd-ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b.scope - libcontainer container ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b. Jan 16 09:06:55.876248 systemd-networkd[1374]: cali55b85a219a9: Link UP Jan 16 09:06:55.878163 systemd-networkd[1374]: cali55b85a219a9: Gained carrier Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:54.995 [INFO][4264] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.047 [INFO][4264] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0 calico-apiserver-5ff4995848- calico-apiserver 51a16b4c-b541-40b1-ba52-b426bfe5e240 807 0 2025-01-16 09:06:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ff4995848 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-d8418dcdb9 calico-apiserver-5ff4995848-5jlzt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali55b85a219a9 [] []}} ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-5jlzt" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.050 [INFO][4264] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-5jlzt" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.293 [INFO][4323] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" HandleID="k8s-pod-network.f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.352 [INFO][4323] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" HandleID="k8s-pod-network.f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000d0870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-d8418dcdb9", "pod":"calico-apiserver-5ff4995848-5jlzt", "timestamp":"2025-01-16 09:06:55.288754685 +0000 UTC"}, Hostname:"ci-4081.3.0-a-d8418dcdb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.352 [INFO][4323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.494 [INFO][4323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.496 [INFO][4323] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-d8418dcdb9' Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.523 [INFO][4323] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.561 [INFO][4323] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.612 [INFO][4323] ipam/ipam.go 489: Trying affinity for 192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.621 [INFO][4323] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.656 [INFO][4323] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.657 [INFO][4323] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.671 [INFO][4323] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.738 [INFO][4323] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.863 [INFO][4323] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.5/26] block=192.168.8.0/26 handle="k8s-pod-network.f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.863 [INFO][4323] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.5/26] handle="k8s-pod-network.f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.863 [INFO][4323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:06:55.950905 containerd[1482]: 2025-01-16 09:06:55.863 [INFO][4323] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.5/26] IPv6=[] ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" HandleID="k8s-pod-network.f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:55.955728 containerd[1482]: 2025-01-16 09:06:55.868 [INFO][4264] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-5jlzt" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0", GenerateName:"calico-apiserver-5ff4995848-", Namespace:"calico-apiserver", SelfLink:"", UID:"51a16b4c-b541-40b1-ba52-b426bfe5e240", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4995848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"", Pod:"calico-apiserver-5ff4995848-5jlzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55b85a219a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:55.955728 containerd[1482]: 2025-01-16 09:06:55.869 [INFO][4264] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.5/32] ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-5jlzt" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:55.955728 containerd[1482]: 2025-01-16 09:06:55.869 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55b85a219a9 ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-5jlzt" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:55.955728 containerd[1482]: 2025-01-16 09:06:55.880 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-5jlzt" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:55.955728 containerd[1482]: 2025-01-16 09:06:55.880 [INFO][4264] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-5jlzt" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0", GenerateName:"calico-apiserver-5ff4995848-", Namespace:"calico-apiserver", SelfLink:"", UID:"51a16b4c-b541-40b1-ba52-b426bfe5e240", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4995848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a", Pod:"calico-apiserver-5ff4995848-5jlzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55b85a219a9", MAC:"f6:d6:12:3e:b0:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:55.955728 containerd[1482]: 2025-01-16 09:06:55.944 [INFO][4264] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a" Namespace="calico-apiserver" Pod="calico-apiserver-5ff4995848-5jlzt" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:06:56.039632 containerd[1482]: time="2025-01-16T09:06:56.035971137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:56.039632 containerd[1482]: time="2025-01-16T09:06:56.036088050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:56.039632 containerd[1482]: time="2025-01-16T09:06:56.036113507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:56.039632 containerd[1482]: time="2025-01-16T09:06:56.036255850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:56.121185 systemd[1]: Started cri-containerd-f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a.scope - libcontainer container f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a. 
Jan 16 09:06:56.156353 kubelet[2571]: E0116 09:06:56.153220 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:56.180051 kubelet[2571]: E0116 09:06:56.180005 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:56.208974 systemd-networkd[1374]: cali54e4a66f96a: Link UP Jan 16 09:06:56.211955 systemd-networkd[1374]: cali54e4a66f96a: Gained carrier Jan 16 09:06:56.225082 systemd-networkd[1374]: calie12bbc23086: Gained IPv6LL Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:54.954 [INFO][4238] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.001 [INFO][4238] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0 csi-node-driver- calico-system b7bd711d-8793-408e-a86f-5638b4667c72 804 0 2025-01-16 09:06:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-d8418dcdb9 csi-node-driver-r7n9l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali54e4a66f96a [] []}} ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Namespace="calico-system" Pod="csi-node-driver-r7n9l" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.004 [INFO][4238] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Namespace="calico-system" Pod="csi-node-driver-r7n9l" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.378 [INFO][4310] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" HandleID="k8s-pod-network.ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.457 [INFO][4310] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" HandleID="k8s-pod-network.ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000341c60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-d8418dcdb9", "pod":"csi-node-driver-r7n9l", "timestamp":"2025-01-16 09:06:55.378387505 +0000 UTC"}, Hostname:"ci-4081.3.0-a-d8418dcdb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.459 
[INFO][4310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.863 [INFO][4310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.864 [INFO][4310] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-d8418dcdb9' Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.888 [INFO][4310] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.939 [INFO][4310] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.964 [INFO][4310] ipam/ipam.go 489: Trying affinity for 192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.972 [INFO][4310] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.980 [INFO][4310] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:55.982 [INFO][4310] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:56.005 [INFO][4310] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46 Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:56.053 [INFO][4310] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:56.169 [INFO][4310] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.6/26] block=192.168.8.0/26 handle="k8s-pod-network.ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:56.178 [INFO][4310] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.6/26] handle="k8s-pod-network.ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" host="ci-4081.3.0-a-d8418dcdb9" Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:56.179 [INFO][4310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:06:56.307398 containerd[1482]: 2025-01-16 09:06:56.181 [INFO][4310] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.6/26] IPv6=[] ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" HandleID="k8s-pod-network.ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:56.309754 containerd[1482]: 2025-01-16 09:06:56.199 [INFO][4238] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Namespace="calico-system" Pod="csi-node-driver-r7n9l" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7bd711d-8793-408e-a86f-5638b4667c72", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"", Pod:"csi-node-driver-r7n9l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54e4a66f96a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:56.309754 containerd[1482]: 2025-01-16 09:06:56.199 [INFO][4238] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.6/32] ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Namespace="calico-system" Pod="csi-node-driver-r7n9l" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:56.309754 containerd[1482]: 2025-01-16 09:06:56.199 [INFO][4238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54e4a66f96a ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Namespace="calico-system" Pod="csi-node-driver-r7n9l" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:56.309754 containerd[1482]: 2025-01-16 09:06:56.214 [INFO][4238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Namespace="calico-system" Pod="csi-node-driver-r7n9l" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:56.309754 containerd[1482]: 2025-01-16 09:06:56.217 [INFO][4238] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" 
Namespace="calico-system" Pod="csi-node-driver-r7n9l" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7bd711d-8793-408e-a86f-5638b4667c72", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46", Pod:"csi-node-driver-r7n9l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54e4a66f96a", MAC:"92:07:c7:ce:db:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:56.309754 containerd[1482]: 2025-01-16 09:06:56.297 [INFO][4238] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46" Namespace="calico-system" Pod="csi-node-driver-r7n9l" WorkloadEndpoint="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:06:56.414124 kubelet[2571]: I0116 09:06:56.414034 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cz7k2" podStartSLOduration=40.413985171 podStartE2EDuration="40.413985171s" podCreationTimestamp="2025-01-16 09:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:56.320194641 +0000 UTC m=+53.256598479" watchObservedRunningTime="2025-01-16 09:06:56.413985171 +0000 UTC m=+53.350389008" Jan 16 09:06:56.431285 containerd[1482]: time="2025-01-16T09:06:56.428571841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:56.431285 containerd[1482]: time="2025-01-16T09:06:56.430174788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:56.431285 containerd[1482]: time="2025-01-16T09:06:56.430198269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:56.431285 containerd[1482]: time="2025-01-16T09:06:56.430352491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:56.502157 systemd[1]: Started cri-containerd-ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46.scope - libcontainer container ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46. Jan 16 09:06:56.674208 systemd-networkd[1374]: calib5d814bc144: Gained IPv6LL Jan 16 09:06:56.797261 kernel: bpftool[4592]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 16 09:06:56.807017 containerd[1482]: time="2025-01-16T09:06:56.806558411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff4995848-5jlzt,Uid:51a16b4c-b541-40b1-ba52-b426bfe5e240,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a\"" Jan 16 09:06:56.811911 containerd[1482]: time="2025-01-16T09:06:56.811851530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7698c84dd8-drr4g,Uid:9030e0e7-f33c-4169-9df7-1b9ed86d0a85,Namespace:calico-system,Attempt:1,} returns sandbox id \"ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b\"" Jan 16 09:06:56.812435 containerd[1482]: time="2025-01-16T09:06:56.812078863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7n9l,Uid:b7bd711d-8793-408e-a86f-5638b4667c72,Namespace:calico-system,Attempt:1,} returns sandbox id \"ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46\"" Jan 16 09:06:57.219882 kubelet[2571]: E0116 09:06:57.216793 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:57.219882 kubelet[2571]: E0116 09:06:57.217792 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:57.266250 kubelet[2571]: I0116 09:06:57.265882 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ff2ps" podStartSLOduration=42.265852408 podStartE2EDuration="42.265852408s" podCreationTimestamp="2025-01-16 09:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:56.42603072 +0000 UTC m=+53.362434565" watchObservedRunningTime="2025-01-16 09:06:57.265852408 +0000 UTC m=+54.202256252" Jan 16 09:06:57.954057 systemd-networkd[1374]: cali55b85a219a9: Gained IPv6LL Jan 16 09:06:58.208986 systemd-networkd[1374]: cali54e4a66f96a: Gained IPv6LL Jan 16 09:06:58.230595 kubelet[2571]: E0116 09:06:58.230264 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:58.237215 kubelet[2571]: E0116 09:06:58.237040 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:58.355259 systemd[1]: Started sshd@7-146.190.127.227:22-139.178.68.195:34550.service - OpenSSH per-connection server daemon (139.178.68.195:34550). 
Jan 16 09:06:58.630667 sshd[4609]: Accepted publickey for core from 139.178.68.195 port 34550 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:58.647208 sshd[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:58.674628 systemd-logind[1456]: New session 8 of user core. Jan 16 09:06:58.678285 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 09:06:58.686385 systemd-networkd[1374]: vxlan.calico: Link UP Jan 16 09:06:58.686402 systemd-networkd[1374]: vxlan.calico: Gained carrier Jan 16 09:06:59.237881 kubelet[2571]: E0116 09:06:59.237806 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:59.239228 kubelet[2571]: E0116 09:06:59.238001 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:59.654904 sshd[4609]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:59.663595 systemd[1]: sshd@7-146.190.127.227:22-139.178.68.195:34550.service: Deactivated successfully. Jan 16 09:06:59.672386 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 09:06:59.680012 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Jan 16 09:06:59.689151 systemd-logind[1456]: Removed session 8. Jan 16 09:06:59.826139 containerd[1482]: time="2025-01-16T09:06:59.825967097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:59.829210 containerd[1482]: time="2025-01-16T09:06:59.828392605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 16 09:06:59.831034 containerd[1482]: time="2025-01-16T09:06:59.830986460Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:59.842956 containerd[1482]: time="2025-01-16T09:06:59.842857001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:59.848121 containerd[1482]: time="2025-01-16T09:06:59.844975759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.952548251s" Jan 16 09:06:59.848121 containerd[1482]: time="2025-01-16T09:06:59.845055982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 16 09:06:59.856514 containerd[1482]: time="2025-01-16T09:06:59.855744884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 16 09:06:59.857996 containerd[1482]: time="2025-01-16T09:06:59.857701216Z" level=info msg="CreateContainer within sandbox \"3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 16 09:06:59.887298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount623687634.mount: Deactivated successfully. Jan 16 09:06:59.908396 containerd[1482]: time="2025-01-16T09:06:59.908186685Z" level=info msg="CreateContainer within sandbox \"3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e90b7e392cf3a383c1df2198c12891cac9f1020e5dbf8199bfc188c61871597\"" Jan 16 09:06:59.913993 containerd[1482]: time="2025-01-16T09:06:59.912180059Z" level=info msg="StartContainer for \"1e90b7e392cf3a383c1df2198c12891cac9f1020e5dbf8199bfc188c61871597\"" Jan 16 09:06:59.986760 systemd[1]: Started cri-containerd-1e90b7e392cf3a383c1df2198c12891cac9f1020e5dbf8199bfc188c61871597.scope - libcontainer container 1e90b7e392cf3a383c1df2198c12891cac9f1020e5dbf8199bfc188c61871597. Jan 16 09:07:00.165958 containerd[1482]: time="2025-01-16T09:07:00.165758710Z" level=info msg="StartContainer for \"1e90b7e392cf3a383c1df2198c12891cac9f1020e5dbf8199bfc188c61871597\" returns successfully" Jan 16 09:07:00.314480 kubelet[2571]: I0116 09:07:00.310720 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ff4995848-jjr88" podStartSLOduration=31.281312567 podStartE2EDuration="37.310698567s" podCreationTimestamp="2025-01-16 09:06:23 +0000 UTC" firstStartedPulling="2025-01-16 09:06:53.823980542 +0000 UTC m=+50.760384357" lastFinishedPulling="2025-01-16 09:06:59.85336653 +0000 UTC m=+56.789770357" observedRunningTime="2025-01-16 09:07:00.310398993 +0000 UTC m=+57.246802833" watchObservedRunningTime="2025-01-16 09:07:00.310698567 +0000 UTC m=+57.247102406" Jan 16 09:07:00.534581 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Jan 16 09:07:00.754835 containerd[1482]: time="2025-01-16T09:07:00.754306501Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:00.755522 containerd[1482]: time="2025-01-16T09:07:00.755450636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 16 09:07:00.759244 containerd[1482]: time="2025-01-16T09:07:00.759164230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 902.441948ms" Jan 16 09:07:00.759244 containerd[1482]: time="2025-01-16T09:07:00.759238263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 16 09:07:00.765883 containerd[1482]: time="2025-01-16T09:07:00.764177922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 16 09:07:00.774227 containerd[1482]: time="2025-01-16T09:07:00.774126191Z" level=info msg="CreateContainer within sandbox \"f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 16 09:07:00.895579 containerd[1482]: time="2025-01-16T09:07:00.895472278Z" level=info msg="CreateContainer within sandbox 
\"f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"90d2d85a9061e5764dcb7b9ab4095dbe183f6bbf3e68786427e62170a42a4996\"" Jan 16 09:07:00.901021 containerd[1482]: time="2025-01-16T09:07:00.899647142Z" level=info msg="StartContainer for \"90d2d85a9061e5764dcb7b9ab4095dbe183f6bbf3e68786427e62170a42a4996\"" Jan 16 09:07:00.984691 systemd[1]: Started cri-containerd-90d2d85a9061e5764dcb7b9ab4095dbe183f6bbf3e68786427e62170a42a4996.scope - libcontainer container 90d2d85a9061e5764dcb7b9ab4095dbe183f6bbf3e68786427e62170a42a4996. Jan 16 09:07:01.134955 containerd[1482]: time="2025-01-16T09:07:01.134873596Z" level=info msg="StartContainer for \"90d2d85a9061e5764dcb7b9ab4095dbe183f6bbf3e68786427e62170a42a4996\" returns successfully" Jan 16 09:07:01.261368 kubelet[2571]: E0116 09:07:01.260455 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:01.269106 kubelet[2571]: I0116 09:07:01.269067 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:07:01.352727 kubelet[2571]: I0116 09:07:01.351624 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ff4995848-5jlzt" podStartSLOduration=34.408821713 podStartE2EDuration="38.351587559s" podCreationTimestamp="2025-01-16 09:06:23 +0000 UTC" firstStartedPulling="2025-01-16 09:06:56.820541727 +0000 UTC m=+53.756945548" lastFinishedPulling="2025-01-16 09:07:00.763307576 +0000 UTC m=+57.699711394" observedRunningTime="2025-01-16 09:07:01.319613998 +0000 UTC m=+58.256017835" watchObservedRunningTime="2025-01-16 09:07:01.351587559 +0000 UTC m=+58.287991431" Jan 16 09:07:02.272415 kubelet[2571]: I0116 09:07:02.271632 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:07:02.948102 containerd[1482]: time="2025-01-16T09:07:02.948025918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:02.950980 containerd[1482]: time="2025-01-16T09:07:02.950897078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 16 09:07:02.952903 containerd[1482]: time="2025-01-16T09:07:02.952848419Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:02.962571 containerd[1482]: time="2025-01-16T09:07:02.962453369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:02.964516 containerd[1482]: time="2025-01-16T09:07:02.964095004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.199847686s" Jan 16 09:07:02.964516 containerd[1482]: time="2025-01-16T09:07:02.964161702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 16 09:07:02.968244 containerd[1482]: time="2025-01-16T09:07:02.968129178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 16 09:07:02.970789 containerd[1482]: time="2025-01-16T09:07:02.970727058Z" level=info msg="CreateContainer within sandbox \"ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 16 09:07:03.021832 containerd[1482]: time="2025-01-16T09:07:03.020008404Z" level=info msg="CreateContainer within sandbox \"ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9bf3ac0f643daa6aa9d1c740e9072a4dec926efbaaa22c7c938b2f52073bcbd4\"" Jan 16 09:07:03.022144 containerd[1482]: time="2025-01-16T09:07:03.022088017Z" level=info msg="StartContainer for \"9bf3ac0f643daa6aa9d1c740e9072a4dec926efbaaa22c7c938b2f52073bcbd4\"" Jan 16 09:07:03.120226 systemd[1]: Started cri-containerd-9bf3ac0f643daa6aa9d1c740e9072a4dec926efbaaa22c7c938b2f52073bcbd4.scope - libcontainer container 9bf3ac0f643daa6aa9d1c740e9072a4dec926efbaaa22c7c938b2f52073bcbd4. Jan 16 09:07:03.205829 containerd[1482]: time="2025-01-16T09:07:03.204150981Z" level=info msg="StartContainer for \"9bf3ac0f643daa6aa9d1c740e9072a4dec926efbaaa22c7c938b2f52073bcbd4\" returns successfully" Jan 16 09:07:03.583228 containerd[1482]: time="2025-01-16T09:07:03.583173893Z" level=info msg="StopPodSandbox for \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\"" Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:03.933 [WARNING][4850] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0", GenerateName:"calico-apiserver-5ff4995848-", Namespace:"calico-apiserver", SelfLink:"", UID:"51a16b4c-b541-40b1-ba52-b426bfe5e240", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4995848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a", Pod:"calico-apiserver-5ff4995848-5jlzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55b85a219a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:03.936 [INFO][4850] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:03.936 [INFO][4850] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" iface="eth0" netns="" Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:03.936 [INFO][4850] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:03.936 [INFO][4850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:04.007 [INFO][4856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" HandleID="k8s-pod-network.ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:04.007 [INFO][4856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:04.007 [INFO][4856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:04.022 [WARNING][4856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" HandleID="k8s-pod-network.ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:04.022 [INFO][4856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" HandleID="k8s-pod-network.ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:04.032 [INFO][4856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:04.048865 containerd[1482]: 2025-01-16 09:07:04.035 [INFO][4850] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:07:04.048865 containerd[1482]: time="2025-01-16T09:07:04.047722856Z" level=info msg="TearDown network for sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\" successfully" Jan 16 09:07:04.048865 containerd[1482]: time="2025-01-16T09:07:04.047763805Z" level=info msg="StopPodSandbox for \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\" returns successfully" Jan 16 09:07:04.062222 containerd[1482]: time="2025-01-16T09:07:04.050377205Z" level=info msg="RemovePodSandbox for \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\"" Jan 16 09:07:04.062222 containerd[1482]: time="2025-01-16T09:07:04.053572480Z" level=info msg="Forcibly stopping sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\"" Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.188 [WARNING][4874] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0", GenerateName:"calico-apiserver-5ff4995848-", Namespace:"calico-apiserver", SelfLink:"", UID:"51a16b4c-b541-40b1-ba52-b426bfe5e240", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4995848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"f39bd9d5b8415f6eefa183cf589f6d6a89a69d56b1c4d6fbf10001f1eae75d0a", Pod:"calico-apiserver-5ff4995848-5jlzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55b85a219a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.189 [INFO][4874] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.189 [INFO][4874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" iface="eth0" netns="" Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.189 [INFO][4874] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.189 [INFO][4874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.258 [INFO][4882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" HandleID="k8s-pod-network.ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.270 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.270 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.282 [WARNING][4882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" HandleID="k8s-pod-network.ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.282 [INFO][4882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" HandleID="k8s-pod-network.ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--5jlzt-eth0" Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.287 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:04.301154 containerd[1482]: 2025-01-16 09:07:04.291 [INFO][4874] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff" Jan 16 09:07:04.301154 containerd[1482]: time="2025-01-16T09:07:04.295710835Z" level=info msg="TearDown network for sandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\" successfully" Jan 16 09:07:04.313435 containerd[1482]: time="2025-01-16T09:07:04.313054083Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:04.313435 containerd[1482]: time="2025-01-16T09:07:04.313199050Z" level=info msg="RemovePodSandbox \"ddd1ac6423d3f53149c03081809ae0877f487d7ad005435111316e440e9deaff\" returns successfully" Jan 16 09:07:04.314924 containerd[1482]: time="2025-01-16T09:07:04.314876489Z" level=info msg="StopPodSandbox for \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\"" Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.420 [WARNING][4902] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3be6b83d-b704-4612-9bea-5273dc682d78", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893", Pod:"coredns-7db6d8ff4d-cz7k2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie12bbc23086", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.420 [INFO][4902] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.421 [INFO][4902] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" iface="eth0" netns="" Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.421 [INFO][4902] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.421 [INFO][4902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.471 [INFO][4908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" HandleID="k8s-pod-network.3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.471 [INFO][4908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.471 [INFO][4908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.487 [WARNING][4908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" HandleID="k8s-pod-network.3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.487 [INFO][4908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" HandleID="k8s-pod-network.3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.518 [INFO][4908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:04.527020 containerd[1482]: 2025-01-16 09:07:04.522 [INFO][4902] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:07:04.527905 containerd[1482]: time="2025-01-16T09:07:04.527082612Z" level=info msg="TearDown network for sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\" successfully" Jan 16 09:07:04.527905 containerd[1482]: time="2025-01-16T09:07:04.527138893Z" level=info msg="StopPodSandbox for \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\" returns successfully" Jan 16 09:07:04.528021 containerd[1482]: time="2025-01-16T09:07:04.527927799Z" level=info msg="RemovePodSandbox for \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\"" Jan 16 09:07:04.528021 containerd[1482]: time="2025-01-16T09:07:04.527972758Z" level=info msg="Forcibly stopping sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\"" Jan 16 09:07:04.649863 kubelet[2571]: I0116 09:07:04.648985 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.606 [WARNING][4926] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3be6b83d-b704-4612-9bea-5273dc682d78", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"119eeb8e79aca82e1bb6d5043f442f48380ffcc94ef2ca77bb4883fc73449893", Pod:"coredns-7db6d8ff4d-cz7k2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie12bbc23086", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.606 [INFO][4926] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.606 [INFO][4926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" iface="eth0" netns="" Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.606 [INFO][4926] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.606 [INFO][4926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.646 [INFO][4932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" HandleID="k8s-pod-network.3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.646 [INFO][4932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.646 [INFO][4932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.659 [WARNING][4932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" HandleID="k8s-pod-network.3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.659 [INFO][4932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" HandleID="k8s-pod-network.3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--cz7k2-eth0" Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.666 [INFO][4932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:04.678104 containerd[1482]: 2025-01-16 09:07:04.670 [INFO][4926] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37" Jan 16 09:07:04.678104 containerd[1482]: time="2025-01-16T09:07:04.677309995Z" level=info msg="TearDown network for sandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\" successfully" Jan 16 09:07:04.686263 systemd[1]: Started sshd@8-146.190.127.227:22-139.178.68.195:34556.service - OpenSSH per-connection server daemon (139.178.68.195:34556). Jan 16 09:07:04.689155 containerd[1482]: time="2025-01-16T09:07:04.688526681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:04.689155 containerd[1482]: time="2025-01-16T09:07:04.688639671Z" level=info msg="RemovePodSandbox \"3f3f18109b2c385fcdacee649b041160e4caf49857a71522c78dd83cd7788c37\" returns successfully" Jan 16 09:07:04.692982 containerd[1482]: time="2025-01-16T09:07:04.692500901Z" level=info msg="StopPodSandbox for \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\"" Jan 16 09:07:04.990000 sshd[4940]: Accepted publickey for core from 139.178.68.195 port 34556 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:04.998369 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:05.011599 systemd-logind[1456]: New session 9 of user core. Jan 16 09:07:05.017204 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:04.863 [WARNING][4953] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9f7d449c-357b-4091-8b94-7aeb96a263ac", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f", Pod:"coredns-7db6d8ff4d-ff2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b2f4b19036", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:04.863 [INFO][4953] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:04.864 [INFO][4953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" iface="eth0" netns="" Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:04.864 [INFO][4953] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:04.864 [INFO][4953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:04.986 [INFO][4962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" HandleID="k8s-pod-network.062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:04.992 [INFO][4962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:04.994 [INFO][4962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:05.024 [WARNING][4962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" HandleID="k8s-pod-network.062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:05.024 [INFO][4962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" HandleID="k8s-pod-network.062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:05.030 [INFO][4962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:05.042568 containerd[1482]: 2025-01-16 09:07:05.035 [INFO][4953] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:07:05.044950 containerd[1482]: time="2025-01-16T09:07:05.043923073Z" level=info msg="TearDown network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\" successfully" Jan 16 09:07:05.044950 containerd[1482]: time="2025-01-16T09:07:05.043969453Z" level=info msg="StopPodSandbox for \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\" returns successfully" Jan 16 09:07:05.045207 containerd[1482]: time="2025-01-16T09:07:05.045019280Z" level=info msg="RemovePodSandbox for \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\"" Jan 16 09:07:05.046950 containerd[1482]: time="2025-01-16T09:07:05.045213406Z" level=info msg="Forcibly stopping sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\"" Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.231 [WARNING][4982] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9f7d449c-357b-4091-8b94-7aeb96a263ac", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"a2fa7a8404fce68d3b556c0adf28a669609daaadbb219617b9adeaf03eb9d76f", Pod:"coredns-7db6d8ff4d-ff2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b2f4b19036", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.233 [INFO][4982] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.233 [INFO][4982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" iface="eth0" netns="" Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.233 [INFO][4982] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.233 [INFO][4982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.360 [INFO][4994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" HandleID="k8s-pod-network.062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.360 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.360 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.395 [WARNING][4994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" HandleID="k8s-pod-network.062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.395 [INFO][4994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" HandleID="k8s-pod-network.062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-coredns--7db6d8ff4d--ff2ps-eth0" Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.401 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:05.410934 containerd[1482]: 2025-01-16 09:07:05.407 [INFO][4982] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130" Jan 16 09:07:05.412903 containerd[1482]: time="2025-01-16T09:07:05.410999948Z" level=info msg="TearDown network for sandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\" successfully" Jan 16 09:07:05.448467 containerd[1482]: time="2025-01-16T09:07:05.447742345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:05.448467 containerd[1482]: time="2025-01-16T09:07:05.448029654Z" level=info msg="RemovePodSandbox \"062aa8a0de3493b7e859366162375c175967e91920dbafe036235dacbbfbe130\" returns successfully" Jan 16 09:07:05.449397 containerd[1482]: time="2025-01-16T09:07:05.449345192Z" level=info msg="StopPodSandbox for \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\"" Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.669 [WARNING][5025] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0", GenerateName:"calico-apiserver-5ff4995848-", Namespace:"calico-apiserver", SelfLink:"", UID:"16bb8998-202c-4c00-8496-dc8eaaa9a516", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4995848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47", Pod:"calico-apiserver-5ff4995848-jjr88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5467b727370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.670 [INFO][5025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.671 [INFO][5025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" iface="eth0" netns="" Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.671 [INFO][5025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.671 [INFO][5025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.763 [INFO][5031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" HandleID="k8s-pod-network.3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.763 [INFO][5031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.763 [INFO][5031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.794 [WARNING][5031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" HandleID="k8s-pod-network.3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.795 [INFO][5031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" HandleID="k8s-pod-network.3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.819 [INFO][5031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:05.848266 containerd[1482]: 2025-01-16 09:07:05.834 [INFO][5025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:07:05.851228 containerd[1482]: time="2025-01-16T09:07:05.848334334Z" level=info msg="TearDown network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\" successfully" Jan 16 09:07:05.851228 containerd[1482]: time="2025-01-16T09:07:05.848377423Z" level=info msg="StopPodSandbox for \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\" returns successfully" Jan 16 09:07:05.856982 containerd[1482]: time="2025-01-16T09:07:05.852542464Z" level=info msg="RemovePodSandbox for \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\"" Jan 16 09:07:05.856982 containerd[1482]: time="2025-01-16T09:07:05.852602038Z" level=info msg="Forcibly stopping sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\"" Jan 16 09:07:05.989762 sshd[4940]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:06.008420 systemd[1]: sshd@8-146.190.127.227:22-139.178.68.195:34556.service: Deactivated successfully. Jan 16 09:07:06.022673 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 09:07:06.038119 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Jan 16 09:07:06.054303 systemd-logind[1456]: Removed session 9. Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.162 [WARNING][5050] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0", GenerateName:"calico-apiserver-5ff4995848-", Namespace:"calico-apiserver", SelfLink:"", UID:"16bb8998-202c-4c00-8496-dc8eaaa9a516", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff4995848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"3830472e4260f4cb05949fcfc39e6e7e60c093a1fe1f03db15cd1479bfa6ec47", Pod:"calico-apiserver-5ff4995848-jjr88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5467b727370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.162 [INFO][5050] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.162 [INFO][5050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" iface="eth0" netns="" Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.162 [INFO][5050] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.162 [INFO][5050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.223 [INFO][5058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" HandleID="k8s-pod-network.3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.223 [INFO][5058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.223 [INFO][5058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.246 [WARNING][5058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" HandleID="k8s-pod-network.3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.246 [INFO][5058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" HandleID="k8s-pod-network.3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--apiserver--5ff4995848--jjr88-eth0" Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.250 [INFO][5058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:06.260828 containerd[1482]: 2025-01-16 09:07:06.254 [INFO][5050] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a" Jan 16 09:07:06.261544 containerd[1482]: time="2025-01-16T09:07:06.261070158Z" level=info msg="TearDown network for sandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\" successfully" Jan 16 09:07:06.286260 containerd[1482]: time="2025-01-16T09:07:06.285984534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:06.286260 containerd[1482]: time="2025-01-16T09:07:06.286100413Z" level=info msg="RemovePodSandbox \"3481bf04e621a58f95e90a4d4af438402703deee23d8bd8a52d864e0ac1f025a\" returns successfully" Jan 16 09:07:06.288128 containerd[1482]: time="2025-01-16T09:07:06.287972184Z" level=info msg="StopPodSandbox for \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\"" Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.456 [WARNING][5076] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0", GenerateName:"calico-kube-controllers-7698c84dd8-", Namespace:"calico-system", SelfLink:"", UID:"9030e0e7-f33c-4169-9df7-1b9ed86d0a85", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7698c84dd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b", Pod:"calico-kube-controllers-7698c84dd8-drr4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5d814bc144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.456 [INFO][5076] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.456 [INFO][5076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" iface="eth0" netns="" Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.456 [INFO][5076] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.456 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.525 [INFO][5082] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" HandleID="k8s-pod-network.cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.525 [INFO][5082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.525 [INFO][5082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.545 [WARNING][5082] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" HandleID="k8s-pod-network.cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.545 [INFO][5082] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" HandleID="k8s-pod-network.cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.554 [INFO][5082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:06.564192 containerd[1482]: 2025-01-16 09:07:06.559 [INFO][5076] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:07:06.566896 containerd[1482]: time="2025-01-16T09:07:06.564256234Z" level=info msg="TearDown network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\" successfully" Jan 16 09:07:06.566896 containerd[1482]: time="2025-01-16T09:07:06.564293962Z" level=info msg="StopPodSandbox for \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\" returns successfully" Jan 16 09:07:06.566896 containerd[1482]: time="2025-01-16T09:07:06.566360517Z" level=info msg="RemovePodSandbox for \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\"" Jan 16 09:07:06.566896 containerd[1482]: time="2025-01-16T09:07:06.566418691Z" level=info msg="Forcibly stopping sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\"" Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.678 [WARNING][5101] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0", GenerateName:"calico-kube-controllers-7698c84dd8-", Namespace:"calico-system", SelfLink:"", UID:"9030e0e7-f33c-4169-9df7-1b9ed86d0a85", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7698c84dd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b", Pod:"calico-kube-controllers-7698c84dd8-drr4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5d814bc144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.678 [INFO][5101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.678 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" iface="eth0" netns="" Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.678 [INFO][5101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.678 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.758 [INFO][5107] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" HandleID="k8s-pod-network.cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.759 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.759 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.772 [WARNING][5107] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" HandleID="k8s-pod-network.cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.772 [INFO][5107] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" HandleID="k8s-pod-network.cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-calico--kube--controllers--7698c84dd8--drr4g-eth0" Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.786 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:06.794850 containerd[1482]: 2025-01-16 09:07:06.789 [INFO][5101] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f" Jan 16 09:07:06.794850 containerd[1482]: time="2025-01-16T09:07:06.794116473Z" level=info msg="TearDown network for sandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\" successfully" Jan 16 09:07:06.806254 containerd[1482]: time="2025-01-16T09:07:06.805951078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:06.806254 containerd[1482]: time="2025-01-16T09:07:06.806062339Z" level=info msg="RemovePodSandbox \"cb565bf8a7f2401cf4d1013bb7482f8d7f2964d730271491811d2349630ff98f\" returns successfully" Jan 16 09:07:06.807956 containerd[1482]: time="2025-01-16T09:07:06.807475469Z" level=info msg="StopPodSandbox for \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\"" Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:06.933 [WARNING][5125] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7bd711d-8793-408e-a86f-5638b4667c72", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46", Pod:"csi-node-driver-r7n9l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54e4a66f96a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:06.933 [INFO][5125] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:06.933 [INFO][5125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" iface="eth0" netns="" Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:06.933 [INFO][5125] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:06.933 [INFO][5125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:07.037 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" HandleID="k8s-pod-network.575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:07.037 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:07.037 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:07.056 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" HandleID="k8s-pod-network.575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:07.056 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" HandleID="k8s-pod-network.575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:07.061 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:07.073474 containerd[1482]: 2025-01-16 09:07:07.067 [INFO][5125] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:07:07.075398 containerd[1482]: time="2025-01-16T09:07:07.073518392Z" level=info msg="TearDown network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\" successfully" Jan 16 09:07:07.075398 containerd[1482]: time="2025-01-16T09:07:07.073556027Z" level=info msg="StopPodSandbox for \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\" returns successfully" Jan 16 09:07:07.075398 containerd[1482]: time="2025-01-16T09:07:07.074734293Z" level=info msg="RemovePodSandbox for \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\"" Jan 16 09:07:07.075398 containerd[1482]: time="2025-01-16T09:07:07.074829339Z" level=info msg="Forcibly stopping sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\"" Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.199 [WARNING][5150] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7bd711d-8793-408e-a86f-5638b4667c72", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-d8418dcdb9", ContainerID:"ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46", Pod:"csi-node-driver-r7n9l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54e4a66f96a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.201 [INFO][5150] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.201 [INFO][5150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" iface="eth0" netns="" Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.201 [INFO][5150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.201 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.267 [INFO][5157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" HandleID="k8s-pod-network.575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.267 [INFO][5157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.267 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.293 [WARNING][5157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" HandleID="k8s-pod-network.575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.293 [INFO][5157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" HandleID="k8s-pod-network.575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Workload="ci--4081.3.0--a--d8418dcdb9-k8s-csi--node--driver--r7n9l-eth0" Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.298 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:07.307176 containerd[1482]: 2025-01-16 09:07:07.302 [INFO][5150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9" Jan 16 09:07:07.307176 containerd[1482]: time="2025-01-16T09:07:07.307015485Z" level=info msg="TearDown network for sandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\" successfully" Jan 16 09:07:07.320010 containerd[1482]: time="2025-01-16T09:07:07.319529434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:07.320010 containerd[1482]: time="2025-01-16T09:07:07.319656952Z" level=info msg="RemovePodSandbox \"575e1a08b3eb401ea281ae589fec74924d892a05e33056c1bf55e014d7b461d9\" returns successfully" Jan 16 09:07:07.325017 containerd[1482]: time="2025-01-16T09:07:07.323816589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:07.325766 containerd[1482]: time="2025-01-16T09:07:07.325560613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 16 09:07:07.327668 containerd[1482]: time="2025-01-16T09:07:07.326752358Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:07.330923 containerd[1482]: time="2025-01-16T09:07:07.330716190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:07.331766 containerd[1482]: time="2025-01-16T09:07:07.331698968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.363506431s" Jan 16 09:07:07.331766 containerd[1482]: time="2025-01-16T09:07:07.331763120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 16 09:07:07.337113 containerd[1482]: time="2025-01-16T09:07:07.335739205Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 16 09:07:07.382461 containerd[1482]: time="2025-01-16T09:07:07.382371864Z" level=info msg="CreateContainer within sandbox \"ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 16 09:07:07.423331 containerd[1482]: time="2025-01-16T09:07:07.423158943Z" level=info msg="CreateContainer within sandbox \"ac8777bd56687c457dff7cc2866804056120d7c8081d3893ea945872e74c971b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a92dce0b83f137dc6693a0a9d061e0531cf208804e2cb3ed113ad4e414b3dbb4\"" Jan 16 09:07:07.424474 containerd[1482]: time="2025-01-16T09:07:07.424417843Z" level=info msg="StartContainer for \"a92dce0b83f137dc6693a0a9d061e0531cf208804e2cb3ed113ad4e414b3dbb4\"" Jan 16 09:07:07.484131 systemd[1]: Started cri-containerd-a92dce0b83f137dc6693a0a9d061e0531cf208804e2cb3ed113ad4e414b3dbb4.scope - libcontainer container a92dce0b83f137dc6693a0a9d061e0531cf208804e2cb3ed113ad4e414b3dbb4. Jan 16 09:07:07.560455 containerd[1482]: time="2025-01-16T09:07:07.560389150Z" level=info msg="StartContainer for \"a92dce0b83f137dc6693a0a9d061e0531cf208804e2cb3ed113ad4e414b3dbb4\" returns successfully" Jan 16 09:07:08.418082 kubelet[2571]: I0116 09:07:08.417465 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7698c84dd8-drr4g" podStartSLOduration=33.904174864 podStartE2EDuration="44.417428161s" podCreationTimestamp="2025-01-16 09:06:24 +0000 UTC" firstStartedPulling="2025-01-16 09:06:56.821912217 +0000 UTC m=+53.758316031" lastFinishedPulling="2025-01-16 09:07:07.335165513 +0000 UTC m=+64.271569328" observedRunningTime="2025-01-16 09:07:08.398511844 +0000 UTC m=+65.334915683" watchObservedRunningTime="2025-01-16 09:07:08.417428161 +0000 UTC m=+65.353831997" Jan 16 09:07:08.424768 systemd[1]: run-containerd-runc-k8s.io-a92dce0b83f137dc6693a0a9d061e0531cf208804e2cb3ed113ad4e414b3dbb4-runc.9zLIq7.mount: Deactivated successfully. 
Jan 16 09:07:09.872405 containerd[1482]: time="2025-01-16T09:07:09.872302870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:09.885655 containerd[1482]: time="2025-01-16T09:07:09.885539335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 16 09:07:09.893351 containerd[1482]: time="2025-01-16T09:07:09.893101630Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:09.898400 containerd[1482]: time="2025-01-16T09:07:09.898276744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:09.900670 containerd[1482]: time="2025-01-16T09:07:09.900441826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.564568348s" Jan 16 09:07:09.900670 containerd[1482]: time="2025-01-16T09:07:09.900525797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 16 09:07:09.926848 containerd[1482]: time="2025-01-16T09:07:09.926065305Z" level=info msg="CreateContainer within sandbox \"ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 16 09:07:09.972188 containerd[1482]: time="2025-01-16T09:07:09.971727534Z" level=info msg="CreateContainer within sandbox \"ebce42d909ae3994ae35dec32a443da77bd01be07722e1886258dccee09f7d46\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6fe4ba61ebda4f900f206a3807c454725b753b14816b74ae1f49b9590832c55e\"" Jan 16 09:07:09.978109 containerd[1482]: time="2025-01-16T09:07:09.975739196Z" level=info msg="StartContainer for \"6fe4ba61ebda4f900f206a3807c454725b753b14816b74ae1f49b9590832c55e\"" Jan 16 09:07:10.101212 systemd[1]: Started cri-containerd-6fe4ba61ebda4f900f206a3807c454725b753b14816b74ae1f49b9590832c55e.scope - libcontainer container 6fe4ba61ebda4f900f206a3807c454725b753b14816b74ae1f49b9590832c55e. 
Jan 16 09:07:10.145759 kubelet[2571]: I0116 09:07:10.145608 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:07:10.187257 containerd[1482]: time="2025-01-16T09:07:10.187176177Z" level=info msg="StartContainer for \"6fe4ba61ebda4f900f206a3807c454725b753b14816b74ae1f49b9590832c55e\" returns successfully" Jan 16 09:07:11.001814 kubelet[2571]: I0116 09:07:11.001104 2571 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 16 09:07:11.013136 kubelet[2571]: I0116 09:07:11.013072 2571 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 16 09:07:11.014454 systemd[1]: Started sshd@9-146.190.127.227:22-139.178.68.195:34200.service - OpenSSH per-connection server daemon (139.178.68.195:34200). Jan 16 09:07:11.319422 sshd[5293]: Accepted publickey for core from 139.178.68.195 port 34200 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:11.327020 sshd[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:11.339163 systemd-logind[1456]: New session 10 of user core. Jan 16 09:07:11.344240 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 09:07:12.275211 sshd[5293]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:12.281109 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Jan 16 09:07:12.281396 systemd[1]: sshd@9-146.190.127.227:22-139.178.68.195:34200.service: Deactivated successfully. Jan 16 09:07:12.284945 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 09:07:12.289443 systemd-logind[1456]: Removed session 10. Jan 16 09:07:17.292293 systemd[1]: Started sshd@10-146.190.127.227:22-139.178.68.195:59100.service - OpenSSH per-connection server daemon (139.178.68.195:59100). Jan 16 09:07:17.373701 sshd[5316]: Accepted publickey for core from 139.178.68.195 port 59100 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:17.370244 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:17.409170 systemd-logind[1456]: New session 11 of user core. Jan 16 09:07:17.416209 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 16 09:07:17.685298 sshd[5316]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:17.708020 systemd[1]: sshd@10-146.190.127.227:22-139.178.68.195:59100.service: Deactivated successfully. Jan 16 09:07:17.716042 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 09:07:17.726867 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Jan 16 09:07:17.743540 systemd[1]: Started sshd@11-146.190.127.227:22-139.178.68.195:59114.service - OpenSSH per-connection server daemon (139.178.68.195:59114). Jan 16 09:07:17.754347 systemd-logind[1456]: Removed session 11. Jan 16 09:07:17.835755 sshd[5330]: Accepted publickey for core from 139.178.68.195 port 59114 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:17.841064 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:17.855913 systemd-logind[1456]: New session 12 of user core. Jan 16 09:07:17.862186 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 16 09:07:18.196248 sshd[5330]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:18.215196 systemd[1]: sshd@11-146.190.127.227:22-139.178.68.195:59114.service: Deactivated successfully. Jan 16 09:07:18.221624 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 09:07:18.229912 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Jan 16 09:07:18.242653 systemd[1]: Started sshd@12-146.190.127.227:22-139.178.68.195:59116.service - OpenSSH per-connection server daemon (139.178.68.195:59116). Jan 16 09:07:18.246857 systemd-logind[1456]: Removed session 12. Jan 16 09:07:18.353750 sshd[5341]: Accepted publickey for core from 139.178.68.195 port 59116 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:18.358686 sshd[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:18.373236 systemd-logind[1456]: New session 13 of user core. Jan 16 09:07:18.381181 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 16 09:07:18.619398 sshd[5341]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:18.624720 systemd[1]: sshd@12-146.190.127.227:22-139.178.68.195:59116.service: Deactivated successfully. Jan 16 09:07:18.628259 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 09:07:18.632084 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Jan 16 09:07:18.633710 systemd-logind[1456]: Removed session 13. Jan 16 09:07:23.640256 systemd[1]: Started sshd@13-146.190.127.227:22-139.178.68.195:59130.service - OpenSSH per-connection server daemon (139.178.68.195:59130). Jan 16 09:07:23.751975 sshd[5366]: Accepted publickey for core from 139.178.68.195 port 59130 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:23.754821 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:23.764079 systemd-logind[1456]: New session 14 of user core. Jan 16 09:07:23.771201 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 09:07:23.979422 sshd[5366]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:23.988906 systemd[1]: sshd@13-146.190.127.227:22-139.178.68.195:59130.service: Deactivated successfully. Jan 16 09:07:23.992622 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 09:07:23.994500 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Jan 16 09:07:23.996290 systemd-logind[1456]: Removed session 14. Jan 16 09:07:24.463380 kubelet[2571]: E0116 09:07:24.463323 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:25.467992 kubelet[2571]: E0116 09:07:25.467425 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:29.006398 systemd[1]: Started sshd@14-146.190.127.227:22-139.178.68.195:49830.service - OpenSSH per-connection server daemon (139.178.68.195:49830). Jan 16 09:07:29.150859 sshd[5380]: Accepted publickey for core from 139.178.68.195 port 49830 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:29.153941 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:29.161080 systemd-logind[1456]: New session 15 of user core. 
Jan 16 09:07:29.168226 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 16 09:07:29.464735 kubelet[2571]: E0116 09:07:29.464681 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:29.782027 sshd[5380]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:29.793038 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Jan 16 09:07:29.794157 systemd[1]: sshd@14-146.190.127.227:22-139.178.68.195:49830.service: Deactivated successfully. Jan 16 09:07:29.799480 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 09:07:29.801399 systemd-logind[1456]: Removed session 15. Jan 16 09:07:34.807656 systemd[1]: Started sshd@15-146.190.127.227:22-139.178.68.195:47576.service - OpenSSH per-connection server daemon (139.178.68.195:47576). Jan 16 09:07:34.915352 sshd[5415]: Accepted publickey for core from 139.178.68.195 port 47576 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:34.917677 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:34.937240 systemd-logind[1456]: New session 16 of user core. Jan 16 09:07:34.942162 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 16 09:07:35.278656 sshd[5415]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:35.292674 systemd[1]: sshd@15-146.190.127.227:22-139.178.68.195:47576.service: Deactivated successfully. Jan 16 09:07:35.297331 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 09:07:35.299240 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Jan 16 09:07:35.302992 systemd-logind[1456]: Removed session 16. Jan 16 09:07:40.312196 systemd[1]: Started sshd@16-146.190.127.227:22-139.178.68.195:47590.service - OpenSSH per-connection server daemon (139.178.68.195:47590). Jan 16 09:07:40.392471 sshd[5449]: Accepted publickey for core from 139.178.68.195 port 47590 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:40.395642 sshd[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:40.403301 systemd-logind[1456]: New session 17 of user core. Jan 16 09:07:40.408935 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 09:07:40.638344 sshd[5449]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:40.653210 systemd[1]: sshd@16-146.190.127.227:22-139.178.68.195:47590.service: Deactivated successfully. Jan 16 09:07:40.658579 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 09:07:40.662311 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Jan 16 09:07:40.672350 systemd[1]: Started sshd@17-146.190.127.227:22-139.178.68.195:47592.service - OpenSSH per-connection server daemon (139.178.68.195:47592). Jan 16 09:07:40.674034 systemd-logind[1456]: Removed session 17. Jan 16 09:07:40.747391 sshd[5468]: Accepted publickey for core from 139.178.68.195 port 47592 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:40.748799 sshd[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:40.757511 systemd-logind[1456]: New session 18 of user core. Jan 16 09:07:40.765196 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 16 09:07:41.308954 sshd[5468]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:41.320698 systemd[1]: sshd@17-146.190.127.227:22-139.178.68.195:47592.service: Deactivated successfully. Jan 16 09:07:41.324007 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 09:07:41.325568 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Jan 16 09:07:41.331517 systemd-logind[1456]: Removed session 18. Jan 16 09:07:41.336321 systemd[1]: Started sshd@18-146.190.127.227:22-139.178.68.195:47604.service - OpenSSH per-connection server daemon (139.178.68.195:47604). Jan 16 09:07:41.456297 sshd[5479]: Accepted publickey for core from 139.178.68.195 port 47604 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:41.462652 sshd[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:41.477871 systemd-logind[1456]: New session 19 of user core. Jan 16 09:07:41.488190 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 16 09:07:43.476841 kubelet[2571]: E0116 09:07:43.475977 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:44.456034 sshd[5479]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:44.479712 systemd[1]: sshd@18-146.190.127.227:22-139.178.68.195:47604.service: Deactivated successfully. Jan 16 09:07:44.488085 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 09:07:44.491552 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Jan 16 09:07:44.506888 systemd[1]: Started sshd@19-146.190.127.227:22-139.178.68.195:47612.service - OpenSSH per-connection server daemon (139.178.68.195:47612). Jan 16 09:07:44.515410 systemd-logind[1456]: Removed session 19. Jan 16 09:07:44.638000 sshd[5496]: Accepted publickey for core from 139.178.68.195 port 47612 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:44.640563 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:44.649069 systemd-logind[1456]: New session 20 of user core. Jan 16 09:07:44.655165 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 16 09:07:46.119869 sshd[5496]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:46.128617 systemd[1]: sshd@19-146.190.127.227:22-139.178.68.195:47612.service: Deactivated successfully. Jan 16 09:07:46.135352 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 09:07:46.142466 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Jan 16 09:07:46.155069 systemd[1]: Started sshd@20-146.190.127.227:22-139.178.68.195:33446.service - OpenSSH per-connection server daemon (139.178.68.195:33446). Jan 16 09:07:46.158992 systemd-logind[1456]: Removed session 20. Jan 16 09:07:46.276234 sshd[5508]: Accepted publickey for core from 139.178.68.195 port 33446 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:46.281517 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:46.293490 systemd-logind[1456]: New session 21 of user core. Jan 16 09:07:46.302140 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 16 09:07:46.641638 sshd[5508]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:46.649378 systemd[1]: sshd@20-146.190.127.227:22-139.178.68.195:33446.service: Deactivated successfully. Jan 16 09:07:46.656303 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 09:07:46.657567 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit. Jan 16 09:07:46.660071 systemd-logind[1456]: Removed session 21. Jan 16 09:07:51.671935 systemd[1]: Started sshd@21-146.190.127.227:22-139.178.68.195:33452.service - OpenSSH per-connection server daemon (139.178.68.195:33452). Jan 16 09:07:51.728820 sshd[5526]: Accepted publickey for core from 139.178.68.195 port 33452 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:51.730628 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:51.750121 systemd-logind[1456]: New session 22 of user core. Jan 16 09:07:51.753175 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 16 09:07:51.835663 systemd[1]: run-containerd-runc-k8s.io-a92dce0b83f137dc6693a0a9d061e0531cf208804e2cb3ed113ad4e414b3dbb4-runc.U7ff0H.mount: Deactivated successfully. Jan 16 09:07:52.049701 sshd[5526]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:52.058159 systemd[1]: sshd@21-146.190.127.227:22-139.178.68.195:33452.service: Deactivated successfully. Jan 16 09:07:52.065449 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 09:07:52.075970 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit. Jan 16 09:07:52.087268 systemd-logind[1456]: Removed session 22. Jan 16 09:07:55.481804 kubelet[2571]: E0116 09:07:55.480036 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:57.072257 systemd[1]: Started sshd@22-146.190.127.227:22-139.178.68.195:49420.service - OpenSSH per-connection server daemon (139.178.68.195:49420). Jan 16 09:07:57.134870 sshd[5561]: Accepted publickey for core from 139.178.68.195 port 49420 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:57.139560 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:57.147263 systemd-logind[1456]: New session 23 of user core. Jan 16 09:07:57.156137 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 16 09:07:57.399276 update_engine[1460]: I20250116 09:07:57.397615 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 16 09:07:57.399276 update_engine[1460]: I20250116 09:07:57.397718 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 16 09:07:57.406452 update_engine[1460]: I20250116 09:07:57.406383 1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 16 09:07:57.408342 update_engine[1460]: I20250116 09:07:57.408147 1460 omaha_request_params.cc:62] Current group set to lts Jan 16 09:07:57.408878 update_engine[1460]: I20250116 09:07:57.408376 1460 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 16 09:07:57.408878 update_engine[1460]: I20250116 09:07:57.408391 1460 update_attempter.cc:643] Scheduling an action processor start. 
Jan 16 09:07:57.408878 update_engine[1460]: I20250116 09:07:57.408416 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 16 09:07:57.408878 update_engine[1460]: I20250116 09:07:57.408475 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 16 09:07:57.408878 update_engine[1460]: I20250116 09:07:57.408548 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 16 09:07:57.408878 update_engine[1460]: I20250116 09:07:57.408556 1460 omaha_request_action.cc:272] Request: Jan 16 09:07:57.408878 update_engine[1460]: Jan 16 09:07:57.408878 update_engine[1460]: Jan 16 09:07:57.408878 update_engine[1460]: Jan 16 09:07:57.408878 update_engine[1460]: Jan 16 09:07:57.408878 update_engine[1460]: Jan 16 09:07:57.408878 update_engine[1460]: Jan 16 09:07:57.408878 update_engine[1460]: Jan 16 09:07:57.408878 update_engine[1460]: Jan 16 09:07:57.408878 update_engine[1460]: I20250116 09:07:57.408565 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 09:07:57.439257 sshd[5561]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:57.441746 update_engine[1460]: I20250116 09:07:57.441268 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 09:07:57.442163 update_engine[1460]: I20250116 09:07:57.442041 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 16 09:07:57.449836 update_engine[1460]: E20250116 09:07:57.449265 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 16 09:07:57.449836 update_engine[1460]: I20250116 09:07:57.449716 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 16 09:07:57.451907 systemd[1]: sshd@22-146.190.127.227:22-139.178.68.195:49420.service: Deactivated successfully. Jan 16 09:07:57.454581 systemd[1]: session-23.scope: Deactivated successfully. Jan 16 09:07:57.460573 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 16 09:07:57.466070 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit. Jan 16 09:07:57.468718 systemd-logind[1456]: Removed session 23. Jan 16 09:08:00.465190 kubelet[2571]: E0116 09:08:00.464737 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:08:02.454557 systemd[1]: Started sshd@23-146.190.127.227:22-139.178.68.195:49436.service - OpenSSH per-connection server daemon (139.178.68.195:49436). Jan 16 09:08:02.582238 sshd[5596]: Accepted publickey for core from 139.178.68.195 port 49436 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:08:02.585383 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:08:02.597271 systemd-logind[1456]: New session 24 of user core. Jan 16 09:08:02.604245 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 16 09:08:02.921469 sshd[5596]: pam_unix(sshd:session): session closed for user core Jan 16 09:08:02.928236 systemd[1]: sshd@23-146.190.127.227:22-139.178.68.195:49436.service: Deactivated successfully. Jan 16 09:08:02.933838 systemd[1]: session-24.scope: Deactivated successfully. Jan 16 09:08:02.935508 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit. Jan 16 09:08:02.936948 systemd-logind[1456]: Removed session 24. 
Jan 16 09:08:07.302861 update_engine[1460]: I20250116 09:08:07.302062 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 09:08:07.302861 update_engine[1460]: I20250116 09:08:07.302353 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 09:08:07.302861 update_engine[1460]: I20250116 09:08:07.302644 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 16 09:08:07.304832 update_engine[1460]: E20250116 09:08:07.304711 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 16 09:08:07.304985 update_engine[1460]: I20250116 09:08:07.304879 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 16 09:08:07.947308 systemd[1]: Started sshd@24-146.190.127.227:22-139.178.68.195:51796.service - OpenSSH per-connection server daemon (139.178.68.195:51796). Jan 16 09:08:08.012867 sshd[5610]: Accepted publickey for core from 139.178.68.195 port 51796 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:08:08.016542 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:08:08.023699 systemd-logind[1456]: New session 25 of user core. Jan 16 09:08:08.034304 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 16 09:08:08.282424 sshd[5610]: pam_unix(sshd:session): session closed for user core Jan 16 09:08:08.289414 systemd[1]: sshd@24-146.190.127.227:22-139.178.68.195:51796.service: Deactivated successfully. Jan 16 09:08:08.293116 systemd[1]: session-25.scope: Deactivated successfully. Jan 16 09:08:08.295560 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit. Jan 16 09:08:08.297238 systemd-logind[1456]: Removed session 25.