Jan 16 09:04:53.165700 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 16 09:04:53.165739 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 16 09:04:53.165757 kernel: BIOS-provided physical RAM map: Jan 16 09:04:53.165768 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 16 09:04:53.165779 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 16 09:04:53.165789 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 16 09:04:53.165803 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jan 16 09:04:53.165814 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jan 16 09:04:53.165825 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 16 09:04:53.165840 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 16 09:04:53.165852 kernel: NX (Execute Disable) protection: active Jan 16 09:04:53.165863 kernel: APIC: Static calls initialized Jan 16 09:04:53.165874 kernel: SMBIOS 2.8 present. Jan 16 09:04:53.165886 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 16 09:04:53.165900 kernel: Hypervisor detected: KVM Jan 16 09:04:53.165917 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 16 09:04:53.165930 kernel: kvm-clock: using sched offset of 5124328887 cycles Jan 16 09:04:53.165943 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 16 09:04:53.165956 kernel: tsc: Detected 2494.138 MHz processor Jan 16 09:04:53.165969 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 16 09:04:53.165994 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 16 09:04:53.166006 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jan 16 09:04:53.166019 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 16 09:04:53.166032 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 16 09:04:53.166061 kernel: ACPI: Early table checksum verification disabled Jan 16 09:04:53.166074 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jan 16 09:04:53.166086 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:04:53.166099 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:04:53.166111 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:04:53.166124 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 16 09:04:53.166136 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:04:53.166149 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:04:53.166161 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:04:53.166179 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:04:53.166191 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 16 09:04:53.166203 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 16 09:04:53.166216 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 16 09:04:53.166228 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 16 09:04:53.166241 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 16 09:04:53.166253 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 16 09:04:53.166275 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 16 09:04:53.166288 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 16 09:04:53.166301 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 16 09:04:53.166315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 16 09:04:53.166328 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 16 09:04:53.166342 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jan 16 09:04:53.166355 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jan 16 09:04:53.166373 kernel: Zone ranges: Jan 16 09:04:53.166387 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 16 09:04:53.166400 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jan 16 09:04:53.166414 kernel: Normal empty Jan 16 09:04:53.166427 kernel: Movable zone start for each node Jan 16 09:04:53.166440 kernel: Early memory node ranges Jan 16 09:04:53.166454 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 16 09:04:53.166467 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jan 16 09:04:53.166481 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jan 16 09:04:53.166499 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 16 09:04:53.166513 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 16 09:04:53.166526 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jan 16 09:04:53.166540 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 16 09:04:53.166553 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 16 09:04:53.166566 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 16 09:04:53.166580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 16 09:04:53.166593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 16 09:04:53.166606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 16 09:04:53.166624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 16 09:04:53.166637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 16 09:04:53.166651 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 16 09:04:53.166664 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 16 09:04:53.166677 kernel: TSC deadline timer available Jan 16 09:04:53.166690 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 16 09:04:53.166704 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 16 09:04:53.166717 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 16 09:04:53.166731 kernel: Booting paravirtualized kernel on KVM Jan 16 09:04:53.166749 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 16 09:04:53.166762 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 16 09:04:53.166776 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 16 09:04:53.166789 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 16 09:04:53.166802 kernel: pcpu-alloc: [0] 0 1 Jan 16 09:04:53.166815 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 16 09:04:53.166830 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 16 09:04:53.166844 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 16 09:04:53.166862 kernel: random: crng init done Jan 16 09:04:53.166875 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 16 09:04:53.166888 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 16 09:04:53.166901 kernel: Fallback order for Node 0: 0 Jan 16 09:04:53.166915 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jan 16 09:04:53.166928 kernel: Policy zone: DMA32 Jan 16 09:04:53.166942 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 16 09:04:53.166955 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Jan 16 09:04:53.166969 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 16 09:04:53.168810 kernel: Kernel/User page tables isolation: enabled Jan 16 09:04:53.168828 kernel: ftrace: allocating 37918 entries in 149 pages Jan 16 09:04:53.168843 kernel: ftrace: allocated 149 pages with 4 groups Jan 16 09:04:53.168858 kernel: Dynamic Preempt: voluntary Jan 16 09:04:53.168873 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 16 09:04:53.168890 kernel: rcu: RCU event tracing is enabled. Jan 16 09:04:53.168905 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 16 09:04:53.168919 kernel: Trampoline variant of Tasks RCU enabled. Jan 16 09:04:53.168933 kernel: Rude variant of Tasks RCU enabled. Jan 16 09:04:53.168951 kernel: Tracing variant of Tasks RCU enabled. Jan 16 09:04:53.168965 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 16 09:04:53.168999 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 16 09:04:53.169013 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 16 09:04:53.169027 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 16 09:04:53.169041 kernel: Console: colour VGA+ 80x25 Jan 16 09:04:53.169054 kernel: printk: console [tty0] enabled Jan 16 09:04:53.169068 kernel: printk: console [ttyS0] enabled Jan 16 09:04:53.169081 kernel: ACPI: Core revision 20230628 Jan 16 09:04:53.169098 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 16 09:04:53.169112 kernel: APIC: Switch to symmetric I/O mode setup Jan 16 09:04:53.169126 kernel: x2apic enabled Jan 16 09:04:53.169139 kernel: APIC: Switched APIC routing to: physical x2apic Jan 16 09:04:53.169152 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 16 09:04:53.169166 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 16 09:04:53.169180 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jan 16 09:04:53.169193 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 16 09:04:53.169207 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 16 09:04:53.169236 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 16 09:04:53.169250 kernel: Spectre V2 : Mitigation: Retpolines Jan 16 09:04:53.169264 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 16 09:04:53.169282 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 16 09:04:53.169296 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 16 09:04:53.169311 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 16 09:04:53.169325 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 16 09:04:53.169339 kernel: MDS: Mitigation: Clear CPU buffers Jan 16 09:04:53.169354 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 16 09:04:53.169372 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 16 09:04:53.169386 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 16 09:04:53.169400 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 16 09:04:53.169415 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 16 09:04:53.169429 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 16 09:04:53.169443 kernel: Freeing SMP alternatives memory: 32K Jan 16 09:04:53.169457 kernel: pid_max: default: 32768 minimum: 301 Jan 16 09:04:53.169471 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 16 09:04:53.169490 kernel: landlock: Up and running. Jan 16 09:04:53.169504 kernel: SELinux: Initializing. Jan 16 09:04:53.169518 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 16 09:04:53.169532 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 16 09:04:53.169558 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 16 09:04:53.169572 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:04:53.169586 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:04:53.169601 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:04:53.169616 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 16 09:04:53.169635 kernel: signal: max sigframe size: 1776 Jan 16 09:04:53.169649 kernel: rcu: Hierarchical SRCU implementation. Jan 16 09:04:53.169663 kernel: rcu: Max phase no-delay instances is 400. Jan 16 09:04:53.169677 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 16 09:04:53.169692 kernel: smp: Bringing up secondary CPUs ... Jan 16 09:04:53.169706 kernel: smpboot: x86: Booting SMP configuration: Jan 16 09:04:53.169720 kernel: .... node #0, CPUs: #1 Jan 16 09:04:53.169735 kernel: smp: Brought up 1 node, 2 CPUs Jan 16 09:04:53.169750 kernel: smpboot: Max logical packages: 1 Jan 16 09:04:53.169768 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jan 16 09:04:53.169782 kernel: devtmpfs: initialized Jan 16 09:04:53.169796 kernel: x86/mm: Memory block size: 128MB Jan 16 09:04:53.169811 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 16 09:04:53.169825 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 16 09:04:53.169840 kernel: pinctrl core: initialized pinctrl subsystem Jan 16 09:04:53.169855 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 16 09:04:53.169869 kernel: audit: initializing netlink subsys (disabled) Jan 16 09:04:53.169884 kernel: audit: type=2000 audit(1737018291.397:1): state=initialized audit_enabled=0 res=1 Jan 16 09:04:53.169901 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 16 09:04:53.169916 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 16 09:04:53.169930 kernel: cpuidle: using governor menu Jan 16 09:04:53.169944 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 16 09:04:53.169959 kernel: dca service started, version 1.12.1 Jan 16 09:04:53.169984 kernel: PCI: Using configuration type 1 for base access Jan 16 09:04:53.169999 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 16 09:04:53.170013 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 16 09:04:53.170028 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 16 09:04:53.170047 kernel: ACPI: Added _OSI(Module Device) Jan 16 09:04:53.170061 kernel: ACPI: Added _OSI(Processor Device) Jan 16 09:04:53.170075 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 16 09:04:53.170090 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 16 09:04:53.170104 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 16 09:04:53.170118 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 16 09:04:53.170132 kernel: ACPI: Interpreter enabled Jan 16 09:04:53.170147 kernel: ACPI: PM: (supports S0 S5) Jan 16 09:04:53.170161 kernel: ACPI: Using IOAPIC for interrupt routing Jan 16 09:04:53.170179 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 16 09:04:53.170193 kernel: PCI: Using E820 reservations for host bridge windows Jan 16 09:04:53.170208 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 16 09:04:53.170222 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 16 09:04:53.170505 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 16 09:04:53.170655 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 16 09:04:53.170799 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 16 09:04:53.170822 kernel: acpiphp: Slot [3] registered Jan 16 09:04:53.170837 kernel: acpiphp: Slot [4] registered Jan 16 09:04:53.170852 kernel: acpiphp: Slot [5] registered Jan 16 09:04:53.170866 kernel: acpiphp: Slot [6] registered Jan 16 09:04:53.170880 kernel: acpiphp: Slot [7] registered Jan 16 09:04:53.170895 kernel: acpiphp: Slot [8] registered Jan 16 09:04:53.170909 kernel: acpiphp: Slot [9] registered Jan 16 09:04:53.170924 kernel: acpiphp: Slot [10] registered Jan 16 09:04:53.170938 kernel: acpiphp: Slot [11] registered Jan 16 09:04:53.170952 kernel: acpiphp: Slot [12] registered Jan 16 09:04:53.170970 kernel: acpiphp: Slot [13] registered Jan 16 09:04:53.173045 kernel: acpiphp: Slot [14] registered Jan 16 09:04:53.173062 kernel: acpiphp: Slot [15] registered Jan 16 09:04:53.173077 kernel: acpiphp: Slot [16] registered Jan 16 09:04:53.173091 kernel: acpiphp: Slot [17] registered Jan 16 09:04:53.173105 kernel: acpiphp: Slot [18] registered Jan 16 09:04:53.173119 kernel: acpiphp: Slot [19] registered Jan 16 09:04:53.173134 kernel: acpiphp: Slot [20] registered Jan 16 09:04:53.173148 kernel: acpiphp: Slot [21] registered Jan 16 09:04:53.173171 kernel: acpiphp: Slot [22] registered Jan 16 09:04:53.173185 kernel: acpiphp: Slot [23] registered Jan 16 09:04:53.173199 kernel: acpiphp: Slot [24] registered Jan 16 09:04:53.173214 kernel: acpiphp: Slot [25] registered Jan 16 09:04:53.173228 kernel: acpiphp: Slot [26] registered Jan 16 09:04:53.173242 kernel: acpiphp: Slot [27] registered Jan 16 09:04:53.173256 kernel: acpiphp: Slot [28] registered Jan 16 09:04:53.173270 kernel: acpiphp: Slot [29] registered Jan 16 09:04:53.173285 kernel: acpiphp: Slot [30] registered Jan 16 09:04:53.173303 kernel: acpiphp: Slot [31] registered Jan 16 09:04:53.173317 kernel: PCI host bridge to bus 0000:00 Jan 16 09:04:53.173530 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 16 09:04:53.173656 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 16 09:04:53.173777 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 16 09:04:53.173907 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 16 09:04:53.176281 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 16 09:04:53.176435 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 16 09:04:53.176609 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 16 09:04:53.176758 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 16 09:04:53.176932 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 16 09:04:53.177095 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 16 09:04:53.177231 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 16 09:04:53.177363 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 16 09:04:53.177508 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 16 09:04:53.177647 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 16 09:04:53.177794 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 16 09:04:53.177931 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 16 09:04:53.178099 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 16 09:04:53.178239 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 16 09:04:53.178381 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 16 09:04:53.178526 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 16 09:04:53.178663 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 16 09:04:53.178799 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 16 09:04:53.178945 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 16 09:04:53.181295 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 16 09:04:53.181454 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 16 09:04:53.181629 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 16 09:04:53.181767 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 16 09:04:53.181898 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 16 09:04:53.184168 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 16 09:04:53.184367 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 16 09:04:53.184510 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 16 09:04:53.184649 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 16 09:04:53.184802 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 16 09:04:53.184951 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 16 09:04:53.185112 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 16 09:04:53.185246 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 16 09:04:53.185381 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 16 09:04:53.185526 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 16 09:04:53.185660 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 16 09:04:53.185817 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 16 09:04:53.185958 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 16 09:04:53.186126 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 16 09:04:53.186271 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 16 09:04:53.186405 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 16 09:04:53.186538 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 16 09:04:53.186711 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 16 09:04:53.186851 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 16 09:04:53.188668 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 16 09:04:53.188704 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 16 09:04:53.188719 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 16 09:04:53.188735 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 16 09:04:53.188749 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 16 09:04:53.188764 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 16 09:04:53.188787 kernel: iommu: Default domain type: Translated Jan 16 09:04:53.188802 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 16 09:04:53.188817 kernel: PCI: Using ACPI for IRQ routing Jan 16 09:04:53.188832 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 16 09:04:53.188847 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 16 09:04:53.188861 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jan 16 09:04:53.189061 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 16 09:04:53.189202 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 16 09:04:53.189341 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 16 09:04:53.189367 kernel: vgaarb: loaded Jan 16 09:04:53.189382 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 16 09:04:53.189397 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 16 09:04:53.189412 kernel: clocksource: Switched to clocksource kvm-clock Jan 16 09:04:53.189427 kernel: VFS: Disk quotas dquot_6.6.0 Jan 16 09:04:53.189441 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 16 09:04:53.189457 kernel: pnp: PnP ACPI init Jan 16 09:04:53.189471 kernel: pnp: PnP ACPI: found 4 devices Jan 16 09:04:53.189486 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 16 09:04:53.189506 kernel: NET: Registered PF_INET protocol family Jan 16 09:04:53.189521 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 16 09:04:53.189536 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 16 09:04:53.189551 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 16 09:04:53.189566 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 16 09:04:53.189581 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 16 09:04:53.189596 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 16 09:04:53.189610 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 16 09:04:53.189628 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 16 09:04:53.189643 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 16 09:04:53.189657 kernel: NET: Registered PF_XDP protocol family Jan 16 09:04:53.189791 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 16 09:04:53.189913 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 16 
09:04:53.190080 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 16 09:04:53.190205 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 16 09:04:53.190325 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 16 09:04:53.190487 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 16 09:04:53.190651 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 16 09:04:53.190671 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 16 09:04:53.190801 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 41299 usecs Jan 16 09:04:53.190819 kernel: PCI: CLS 0 bytes, default 64 Jan 16 09:04:53.190834 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 16 09:04:53.190848 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 16 09:04:53.190862 kernel: Initialise system trusted keyrings Jan 16 09:04:53.190875 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 16 09:04:53.190894 kernel: Key type asymmetric registered Jan 16 09:04:53.190908 kernel: Asymmetric key parser 'x509' registered Jan 16 09:04:53.190922 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 16 09:04:53.190936 kernel: io scheduler mq-deadline registered Jan 16 09:04:53.190950 kernel: io scheduler kyber registered Jan 16 09:04:53.190964 kernel: io scheduler bfq registered Jan 16 09:04:53.190978 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 16 09:04:53.191092 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 16 09:04:53.191107 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 16 09:04:53.191125 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 16 09:04:53.191139 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 16 09:04:53.191153 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 16 09:04:53.191168 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 16 09:04:53.191182 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 16 09:04:53.191196 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 16 09:04:53.191211 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 16 09:04:53.191407 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 16 09:04:53.191535 kernel: rtc_cmos 00:03: registered as rtc0 Jan 16 09:04:53.191654 kernel: rtc_cmos 00:03: setting system clock to 2025-01-16T09:04:52 UTC (1737018292) Jan 16 09:04:53.191769 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 16 09:04:53.191786 kernel: intel_pstate: CPU model not supported Jan 16 09:04:53.191816 kernel: NET: Registered PF_INET6 protocol family Jan 16 09:04:53.191843 kernel: Segment Routing with IPv6 Jan 16 09:04:53.191858 kernel: In-situ OAM (IOAM) with IPv6 Jan 16 09:04:53.191873 kernel: NET: Registered PF_PACKET protocol family Jan 16 09:04:53.191887 kernel: Key type dns_resolver registered Jan 16 09:04:53.191909 kernel: IPI shorthand broadcast: enabled Jan 16 09:04:53.191924 kernel: sched_clock: Marking stable (1357004760, 130619267)->(1595462317, -107838290) Jan 16 09:04:53.191938 kernel: registered taskstats version 1 Jan 16 09:04:53.191952 kernel: Loading compiled-in X.509 certificates Jan 16 09:04:53.191967 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 16 09:04:53.191981 kernel: Key type .fscrypt 
registered Jan 16 09:04:53.192011 kernel: Key type fscrypt-provisioning registered Jan 16 09:04:53.192025 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 16 09:04:53.192043 kernel: ima: Allocated hash algorithm: sha1 Jan 16 09:04:53.192058 kernel: ima: No architecture policies found Jan 16 09:04:53.192072 kernel: clk: Disabling unused clocks Jan 16 09:04:53.192086 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 16 09:04:53.192101 kernel: Write protecting the kernel read-only data: 36864k Jan 16 09:04:53.192139 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 16 09:04:53.192157 kernel: Run /init as init process Jan 16 09:04:53.192172 kernel: with arguments: Jan 16 09:04:53.192188 kernel: /init Jan 16 09:04:53.192202 kernel: with environment: Jan 16 09:04:53.192220 kernel: HOME=/ Jan 16 09:04:53.192235 kernel: TERM=linux Jan 16 09:04:53.192249 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 16 09:04:53.192268 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 09:04:53.192286 systemd[1]: Detected virtualization kvm. Jan 16 09:04:53.192302 systemd[1]: Detected architecture x86-64. Jan 16 09:04:53.192318 systemd[1]: Running in initrd. Jan 16 09:04:53.192336 systemd[1]: No hostname configured, using default hostname. Jan 16 09:04:53.192352 systemd[1]: Hostname set to . Jan 16 09:04:53.192368 systemd[1]: Initializing machine ID from VM UUID. Jan 16 09:04:53.192383 systemd[1]: Queued start job for default target initrd.target. Jan 16 09:04:53.192399 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:04:53.192414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:04:53.192431 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 16 09:04:53.192446 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 09:04:53.192465 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 16 09:04:53.192481 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 16 09:04:53.192500 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 16 09:04:53.192516 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 16 09:04:53.192532 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:04:53.192547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:04:53.192563 systemd[1]: Reached target paths.target - Path Units. Jan 16 09:04:53.192582 systemd[1]: Reached target slices.target - Slice Units. Jan 16 09:04:53.192598 systemd[1]: Reached target swap.target - Swaps. Jan 16 09:04:53.192616 systemd[1]: Reached target timers.target - Timer Units. Jan 16 09:04:53.192632 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 09:04:53.192648 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 16 09:04:53.192670 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 09:04:53.192686 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 16 09:04:53.192702 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:04:53.192718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 09:04:53.192733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:04:53.192750 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 09:04:53.192765 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 16 09:04:53.192782 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 09:04:53.192798 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 16 09:04:53.192817 systemd[1]: Starting systemd-fsck-usr.service... Jan 16 09:04:53.192833 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 09:04:53.192849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 09:04:53.192865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:04:53.192881 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 16 09:04:53.192933 systemd-journald[183]: Collecting audit messages is disabled. Jan 16 09:04:53.193089 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:04:53.193107 systemd[1]: Finished systemd-fsck-usr.service. Jan 16 09:04:53.193125 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 09:04:53.193147 systemd-journald[183]: Journal started Jan 16 09:04:53.193182 systemd-journald[183]: Runtime Journal (/run/log/journal/b403c3030ea8451293a4a0f491216f16) is 4.9M, max 39.3M, 34.4M free. Jan 16 09:04:53.195798 systemd-modules-load[184]: Inserted module 'overlay' Jan 16 09:04:53.265601 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 09:04:53.265640 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 16 09:04:53.265674 kernel: Bridge firewalling registered Jan 16 09:04:53.242255 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 16 09:04:53.268470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 09:04:53.269521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:04:53.275438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:04:53.288367 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:04:53.297591 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:04:53.310247 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 09:04:53.319253 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 09:04:53.327297 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:04:53.332912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:04:53.338799 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 16 09:04:53.349479 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 16 09:04:53.353163 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:04:53.361331 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 09:04:53.380967 dracut-cmdline[216]: dracut-dracut-053 Jan 16 09:04:53.386241 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 16 09:04:53.425160 systemd-resolved[218]: Positive Trust Anchors: Jan 16 09:04:53.425184 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 09:04:53.425251 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 09:04:53.434787 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 16 09:04:53.437902 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 09:04:53.438638 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:04:53.544032 kernel: SCSI subsystem initialized Jan 16 09:04:53.560051 kernel: Loading iSCSI transport class v2.0-870. Jan 16 09:04:53.578051 kernel: iscsi: registered transport (tcp) Jan 16 09:04:53.612331 kernel: iscsi: registered transport (qla4xxx) Jan 16 09:04:53.612411 kernel: QLogic iSCSI HBA Driver Jan 16 09:04:53.708951 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 16 09:04:53.723596 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 16 09:04:53.764491 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 16 09:04:53.765174 kernel: device-mapper: uevent: version 1.0.3 Jan 16 09:04:53.765205 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 16 09:04:53.863870 kernel: raid6: avx2x4 gen() 11520 MB/s Jan 16 09:04:53.867045 kernel: raid6: avx2x2 gen() 12527 MB/s Jan 16 09:04:53.892741 kernel: raid6: avx2x1 gen() 9408 MB/s Jan 16 09:04:53.892858 kernel: raid6: using algorithm avx2x2 gen() 12527 MB/s Jan 16 09:04:53.913024 kernel: raid6: .... xor() 10293 MB/s, rmw enabled Jan 16 09:04:53.913105 kernel: raid6: using avx2x2 recovery algorithm Jan 16 09:04:53.948074 kernel: xor: automatically using best checksumming function avx Jan 16 09:04:54.239220 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 09:04:54.265031 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 16 09:04:54.273400 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 16 09:04:54.306825 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 16 09:04:54.349478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:04:54.364471 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 16 09:04:54.387499 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jan 16 09:04:54.449786 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 09:04:54.467659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 09:04:54.567175 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:04:54.574268 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 16 09:04:54.602034 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 16 09:04:54.608906 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 09:04:54.612346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:04:54.612887 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 09:04:54.621280 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 16 09:04:54.656647 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 16 09:04:54.677080 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 16 09:04:54.771657 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 16 09:04:54.771943 kernel: scsi host0: Virtio SCSI HBA Jan 16 09:04:54.772196 kernel: cryptd: max_cpu_qlen set to 1000 Jan 16 09:04:54.772221 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 16 09:04:54.772244 kernel: GPT:9289727 != 125829119 Jan 16 09:04:54.772265 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 16 09:04:54.772286 kernel: GPT:9289727 != 125829119 Jan 16 09:04:54.772308 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 16 09:04:54.772338 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:04:54.772359 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 16 09:04:54.830052 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Jan 16 09:04:54.831284 kernel: AVX2 version of gcm_enc/dec engaged. Jan 16 09:04:54.831318 kernel: AES CTR mode by8 optimization enabled Jan 16 09:04:54.831341 kernel: ACPI: bus type USB registered Jan 16 09:04:54.831364 kernel: libata version 3.00 loaded. Jan 16 09:04:54.763953 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 09:04:54.838705 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 16 09:04:54.851651 kernel: scsi host1: ata_piix Jan 16 09:04:54.852177 kernel: scsi host2: ata_piix Jan 16 09:04:54.852404 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 16 09:04:54.852435 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 16 09:04:54.852457 kernel: usbcore: registered new interface driver usbfs Jan 16 09:04:54.852499 kernel: usbcore: registered new interface driver hub Jan 16 09:04:54.852521 kernel: usbcore: registered new device driver usb Jan 16 09:04:54.764193 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 16 09:04:54.961073 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Jan 16 09:04:54.961115 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (450) Jan 16 09:04:54.775565 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:04:54.776365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:04:54.776621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:04:54.777310 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:04:54.788892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:04:54.939685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 09:04:54.963035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:04:54.987527 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 16 09:04:54.994491 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 16 09:04:55.000422 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 16 09:04:55.001536 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 16 09:04:55.009414 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 09:04:55.024633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:04:55.048073 disk-uuid[534]: Primary Header is updated. Jan 16 09:04:55.048073 disk-uuid[534]: Secondary Entries is updated. Jan 16 09:04:55.048073 disk-uuid[534]: Secondary Header is updated. Jan 16 09:04:55.060462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:04:55.071062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:04:55.084144 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:04:55.118823 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 16 09:04:55.126946 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 16 09:04:55.127306 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 16 09:04:55.127547 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 16 09:04:55.127954 kernel: hub 1-0:1.0: USB hub found Jan 16 09:04:55.130813 kernel: hub 1-0:1.0: 2 ports detected Jan 16 09:04:56.095249 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:04:56.096086 disk-uuid[541]: The operation has completed successfully. Jan 16 09:04:56.249719 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 09:04:56.256251 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 09:04:56.283517 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 16 09:04:56.294105 sh[563]: Success Jan 16 09:04:56.350482 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 16 09:04:56.556236 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 16 09:04:56.569641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 16 09:04:56.573210 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 16 09:04:56.678159 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 16 09:04:56.678307 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:04:56.678333 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 16 09:04:56.678358 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 09:04:56.678383 kernel: BTRFS info (device dm-0): using free space tree Jan 16 09:04:56.702412 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 16 09:04:56.704422 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 09:04:56.724880 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 09:04:56.732438 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 09:04:56.765226 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:04:56.765373 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:04:56.767145 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:04:56.783200 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:04:56.801419 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 16 09:04:56.804237 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:04:56.816876 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 09:04:56.828399 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 16 09:04:57.060197 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 09:04:57.064292 ignition[659]: Ignition 2.19.0 Jan 16 09:04:57.065117 ignition[659]: Stage: fetch-offline Jan 16 09:04:57.065206 ignition[659]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:04:57.065224 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:04:57.065500 ignition[659]: parsed url from cmdline: "" Jan 16 09:04:57.065506 ignition[659]: no config URL provided Jan 16 09:04:57.065516 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 09:04:57.065531 ignition[659]: no config at "/usr/lib/ignition/user.ign" Jan 16 09:04:57.065541 ignition[659]: failed to fetch config: resource requires networking Jan 16 09:04:57.065877 ignition[659]: Ignition finished successfully Jan 16 09:04:57.070380 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 09:04:57.071737 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 09:04:57.116499 systemd-networkd[753]: lo: Link UP Jan 16 09:04:57.116519 systemd-networkd[753]: lo: Gained carrier Jan 16 09:04:57.120909 systemd-networkd[753]: Enumeration completed Jan 16 09:04:57.121151 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 09:04:57.122521 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 16 09:04:57.122528 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 16 09:04:57.123393 systemd[1]: Reached target network.target - Network. 
Jan 16 09:04:57.126195 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 09:04:57.126201 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 09:04:57.128821 systemd-networkd[753]: eth0: Link UP Jan 16 09:04:57.128830 systemd-networkd[753]: eth0: Gained carrier Jan 16 09:04:57.128848 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 16 09:04:57.143380 systemd-networkd[753]: eth1: Link UP Jan 16 09:04:57.143396 systemd-networkd[753]: eth1: Gained carrier Jan 16 09:04:57.143417 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 09:04:57.145324 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 16 09:04:57.161073 systemd-networkd[753]: eth0: DHCPv4 address 137.184.14.123/20, gateway 137.184.0.1 acquired from 169.254.169.253 Jan 16 09:04:57.164219 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253 Jan 16 09:04:57.189365 ignition[756]: Ignition 2.19.0 Jan 16 09:04:57.189382 ignition[756]: Stage: fetch Jan 16 09:04:57.189713 ignition[756]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:04:57.189732 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:04:57.189917 ignition[756]: parsed url from cmdline: "" Jan 16 09:04:57.189924 ignition[756]: no config URL provided Jan 16 09:04:57.189933 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 09:04:57.189947 ignition[756]: no config at "/usr/lib/ignition/user.ign" Jan 16 09:04:57.189998 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 16 09:04:57.209303 ignition[756]: GET result: OK Jan 16 09:04:57.209658 ignition[756]: parsing config with SHA512: 28641bd94bfcaff1b484d51e727f9a19178c459b885985f31fa6949ba159d11aa98a02f59e5f6ee1b09934098f56682a5672c3a90606eae91ed54c18b6005d44 Jan 16 09:04:57.218092 unknown[756]: fetched base config from "system" Jan 16 09:04:57.218113 unknown[756]: fetched base config from "system" Jan 16 09:04:57.218965 ignition[756]: fetch: fetch complete Jan 16 09:04:57.218128 unknown[756]: fetched user config from "digitalocean" Jan 16 09:04:57.219016 ignition[756]: fetch: fetch passed Jan 16 09:04:57.219105 ignition[756]: Ignition finished successfully Jan 16 09:04:57.222405 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 09:04:57.240448 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 16 09:04:57.269737 ignition[764]: Ignition 2.19.0 Jan 16 09:04:57.269748 ignition[764]: Stage: kargs Jan 16 09:04:57.269994 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:04:57.270007 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:04:57.271334 ignition[764]: kargs: kargs passed Jan 16 09:04:57.274204 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 09:04:57.271406 ignition[764]: Ignition finished successfully Jan 16 09:04:57.282464 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 16 09:04:57.320083 ignition[771]: Ignition 2.19.0 Jan 16 09:04:57.320103 ignition[771]: Stage: disks Jan 16 09:04:57.320493 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:04:57.320512 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:04:57.322640 ignition[771]: disks: disks passed Jan 16 09:04:57.322729 ignition[771]: Ignition finished successfully Jan 16 09:04:57.327887 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 16 09:04:57.330216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 09:04:57.331398 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 09:04:57.332940 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 09:04:57.334350 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 09:04:57.335497 systemd[1]: Reached target basic.target - Basic System. Jan 16 09:04:57.345301 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 09:04:57.392611 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 16 09:04:57.396216 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 09:04:57.412366 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 09:04:57.566052 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 16 09:04:57.569215 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 09:04:57.570826 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 09:04:57.580240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 09:04:57.584192 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 09:04:57.587321 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 16 09:04:57.594188 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 16 09:04:57.597041 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787) Jan 16 09:04:57.597854 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 09:04:57.598852 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 09:04:57.603514 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:04:57.607112 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:04:57.607175 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:04:57.622238 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 09:04:57.627293 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 16 09:04:57.659048 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:04:57.667327 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 16 09:04:57.775043 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 16 09:04:57.795096 coreos-metadata[789]: Jan 16 09:04:57.794 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:04:57.796964 coreos-metadata[790]: Jan 16 09:04:57.796 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:04:57.799776 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 16 09:04:57.810287 coreos-metadata[789]: Jan 16 09:04:57.808 INFO Fetch successful Jan 16 09:04:57.814015 coreos-metadata[790]: Jan 16 09:04:57.813 INFO Fetch successful Jan 16 09:04:57.814778 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jan 16 09:04:57.823798 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 16 09:04:57.824031 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 16 09:04:57.830422 coreos-metadata[790]: Jan 16 09:04:57.825 INFO wrote hostname ci-4081.3.0-f-3b05cacdca to /sysroot/etc/hostname Jan 16 09:04:57.833172 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Jan 16 09:04:57.831181 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 09:04:58.045163 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 09:04:58.057261 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 09:04:58.059274 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 09:04:58.076868 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 09:04:58.078164 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:04:58.137568 ignition[907]: INFO : Ignition 2.19.0 Jan 16 09:04:58.141156 ignition[907]: INFO : Stage: mount Jan 16 09:04:58.141156 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:04:58.141156 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:04:58.144397 ignition[907]: INFO : mount: mount passed Jan 16 09:04:58.144397 ignition[907]: INFO : Ignition finished successfully Jan 16 09:04:58.145892 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 09:04:58.154246 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 09:04:58.158222 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 09:04:58.182361 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 09:04:58.199056 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (920) Jan 16 09:04:58.204136 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:04:58.204244 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:04:58.204266 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:04:58.221240 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:04:58.226343 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
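The flatcar-metadata-hostname.service entries above fetch the droplet's metadata JSON and persist the hostname into the new root before switch-root. A rough Python equivalent, assuming the documented DigitalOcean metadata schema (the actual agent is the coreos-metadata binary, not Python):

```python
import json
import urllib.request

# Endpoint from the log: "Fetching http://169.254.169.254/metadata/v1.json".
METADATA_URL = "http://169.254.169.254/metadata/v1.json"

def write_hostname(sysroot: str = "/sysroot") -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=5.0) as resp:
        meta = json.load(resp)
    hostname = meta["hostname"]  # e.g. ci-4081.3.0-f-3b05cacdca in this boot
    # Written under /sysroot so the name survives the pivot into the real root.
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname
```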
Jan 16 09:04:58.290075 ignition[937]: INFO : Ignition 2.19.0 Jan 16 09:04:58.290075 ignition[937]: INFO : Stage: files Jan 16 09:04:58.290075 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:04:58.290075 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:04:58.300717 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Jan 16 09:04:58.302851 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 09:04:58.302851 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 09:04:58.308328 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 09:04:58.309471 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 09:04:58.310913 unknown[937]: wrote ssh authorized keys file for user: core Jan 16 09:04:58.312214 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 09:04:58.314914 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 09:04:58.314914 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 16 09:04:58.376469 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 16 09:04:58.449210 systemd-networkd[753]: eth0: Gained IPv6LL Jan 16 09:04:58.580342 systemd-networkd[753]: eth1: Gained IPv6LL Jan 16 09:04:58.590765 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 09:04:58.590765 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 16 09:04:58.590765 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 09:04:58.590765 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 16 09:04:58.595267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 16 09:04:58.947658 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 16 09:04:59.309693 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 16 09:04:59.309693 ignition[937]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 16 09:04:59.321237 ignition[937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 09:04:59.321237 ignition[937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 09:04:59.321237 ignition[937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 16 09:04:59.321237 ignition[937]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 16 09:04:59.321237 ignition[937]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 09:04:59.321237 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 09:04:59.321237 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 09:04:59.321237 ignition[937]: INFO : files: files passed Jan 16 09:04:59.321237 ignition[937]: INFO : Ignition finished successfully Jan 16 09:04:59.321738 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 09:04:59.331257 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 09:04:59.336068 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 09:04:59.341746 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 09:04:59.341888 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 09:04:59.369063 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 09:04:59.369063 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 09:04:59.371913 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 09:04:59.373723 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 09:04:59.374469 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 09:04:59.386382 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 09:04:59.434928 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jan 16 09:04:59.435142 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 09:04:59.439368 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 09:04:59.440627 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 09:04:59.441828 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 09:04:59.447382 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 09:04:59.480534 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 09:04:59.486396 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 09:04:59.508415 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:04:59.509510 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:04:59.510816 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 09:04:59.512303 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 09:04:59.512537 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 09:04:59.513787 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 09:04:59.514539 systemd[1]: Stopped target basic.target - Basic System. Jan 16 09:04:59.515406 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 09:04:59.516335 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 09:04:59.517205 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 09:04:59.517970 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 09:04:59.518692 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 09:04:59.519750 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 09:04:59.520567 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 09:04:59.521376 systemd[1]: Stopped target swap.target - Swaps. Jan 16 09:04:59.522056 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 09:04:59.522347 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 09:04:59.523469 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:04:59.524355 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:04:59.525172 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 09:04:59.525352 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:04:59.526184 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 09:04:59.526479 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 09:04:59.527683 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 09:04:59.528018 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 09:04:59.528999 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 09:04:59.529274 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 09:04:59.530179 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 09:04:59.530440 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 16 09:04:59.565505 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 09:04:59.569427 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 09:04:59.570193 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 09:04:59.570553 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:04:59.581418 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 09:04:59.583245 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 09:04:59.594329 ignition[989]: INFO : Ignition 2.19.0 Jan 16 09:04:59.595829 ignition[989]: INFO : Stage: umount Jan 16 09:04:59.598008 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:04:59.598008 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:04:59.602895 ignition[989]: INFO : umount: umount passed Jan 16 09:04:59.604885 ignition[989]: INFO : Ignition finished successfully Jan 16 09:04:59.605320 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 09:04:59.606118 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 09:04:59.612712 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 09:04:59.613767 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 09:04:59.618903 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 09:04:59.621125 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 09:04:59.621861 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 09:04:59.621957 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 09:04:59.628445 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 09:04:59.631159 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 09:04:59.631917 systemd[1]: Stopped target network.target - Network. Jan 16 09:04:59.632866 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 09:04:59.632965 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 09:04:59.634324 systemd[1]: Stopped target paths.target - Path Units. Jan 16 09:04:59.635337 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 09:04:59.635632 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:04:59.636678 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 09:04:59.637582 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 09:04:59.639936 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 09:04:59.640036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 09:04:59.641513 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 09:04:59.641572 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 09:04:59.643884 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 09:04:59.644009 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 09:04:59.644928 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 09:04:59.645015 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 09:04:59.646054 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 16 09:04:59.652211 systemd-networkd[753]: eth0: DHCPv6 lease lost Jan 16 09:04:59.670113 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 09:04:59.672104 systemd-networkd[753]: eth1: DHCPv6 lease lost Jan 16 09:04:59.672962 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 09:04:59.677094 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 09:04:59.677305 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 09:04:59.679186 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 09:04:59.679346 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 09:04:59.756474 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 09:04:59.756588 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:04:59.765440 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 09:04:59.767775 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 09:04:59.767890 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 09:04:59.769906 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 09:04:59.770009 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:04:59.772519 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 09:04:59.772644 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 09:04:59.778602 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 09:04:59.778733 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:04:59.779605 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 09:04:59.780718 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 09:04:59.782301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 09:04:59.797450 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 09:04:59.797601 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 09:04:59.807860 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 09:04:59.808090 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 09:04:59.809713 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 09:04:59.809955 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:04:59.813537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 09:04:59.813606 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 09:04:59.817357 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 09:04:59.817489 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:04:59.818312 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 09:04:59.818402 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 09:04:59.819267 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 09:04:59.819348 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 09:04:59.820417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 09:04:59.820507 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 16 09:04:59.841487 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 09:04:59.842234 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 09:04:59.842588 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:04:59.843452 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 16 09:04:59.843535 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:04:59.844379 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 09:04:59.844463 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:04:59.845698 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:04:59.845789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:04:59.865124 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 09:04:59.865379 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 09:04:59.867302 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 09:04:59.878429 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 09:04:59.905416 systemd[1]: Switching root. Jan 16 09:04:59.942104 systemd-journald[183]: Journal stopped Jan 16 09:05:02.246832 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 16 09:05:02.246968 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 09:05:02.247030 kernel: SELinux: policy capability open_perms=1 Jan 16 09:05:02.247095 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 09:05:02.247119 kernel: SELinux: policy capability always_check_network=0 Jan 16 09:05:02.247139 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 09:05:02.247162 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 09:05:02.258499 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 09:05:02.258543 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 09:05:02.258566 kernel: audit: type=1403 audit(1737018300.239:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 16 09:05:02.258592 systemd[1]: Successfully loaded SELinux policy in 59.475ms. Jan 16 09:05:02.258639 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.730ms. Jan 16 09:05:02.258666 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 09:05:02.258690 systemd[1]: Detected virtualization kvm. Jan 16 09:05:02.258713 systemd[1]: Detected architecture x86-64. Jan 16 09:05:02.258745 systemd[1]: Detected first boot. Jan 16 09:05:02.258767 systemd[1]: Hostname set to <ci-4081.3.0-f-3b05cacdca>. Jan 16 09:05:02.258798 systemd[1]: Initializing machine ID from VM UUID. Jan 16 09:05:02.258821 zram_generator::config[1031]: No configuration found. Jan 16 09:05:02.258845 systemd[1]: Populated /etc with preset unit settings. Jan 16 09:05:02.258869 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 09:05:02.258891 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 16 09:05:02.258912 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 16 09:05:02.258943 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 09:05:02.258968 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 09:05:02.259039 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 09:05:02.259065 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 09:05:02.259088 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 09:05:02.259112 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 09:05:02.259136 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 09:05:02.259161 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 09:05:02.259184 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:05:02.259212 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:05:02.259235 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 09:05:02.259258 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 09:05:02.259279 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 09:05:02.259301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 09:05:02.259324 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 16 09:05:02.259348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:05:02.259370 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 09:05:02.259416 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 16 09:05:02.259441 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 09:05:02.259481 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 09:05:02.259505 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:05:02.259529 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 09:05:02.259552 systemd[1]: Reached target slices.target - Slice Units. Jan 16 09:05:02.259576 systemd[1]: Reached target swap.target - Swaps. Jan 16 09:05:02.259605 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 09:05:02.259629 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 09:05:02.259652 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:05:02.259676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 09:05:02.259700 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:05:02.259739 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 16 09:05:02.259760 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 09:05:02.259782 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 09:05:02.259804 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 16 09:05:02.259833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:02.259855 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 09:05:02.259878 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 09:05:02.259899 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 09:05:02.259922 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 09:05:02.259943 systemd[1]: Reached target machines.target - Containers. Jan 16 09:05:02.259965 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 09:05:02.262758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:02.262820 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 09:05:02.262850 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 09:05:02.262873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:05:02.262897 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 09:05:02.262918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 09:05:02.262939 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 09:05:02.262961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:05:02.278077 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 09:05:02.278165 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 09:05:02.278205 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 09:05:02.278229 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 09:05:02.278252 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 09:05:02.278276 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 09:05:02.278298 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 09:05:02.278321 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 09:05:02.278343 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 09:05:02.278366 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 09:05:02.278391 systemd[1]: verity-setup.service: Deactivated successfully. Jan 16 09:05:02.278418 systemd[1]: Stopped verity-setup.service. Jan 16 09:05:02.278441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:02.278465 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 09:05:02.278487 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 09:05:02.278509 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 09:05:02.278533 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 09:05:02.278561 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 16 09:05:02.278585 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 09:05:02.278610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:05:02.278637 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 09:05:02.278665 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 09:05:02.278687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:05:02.278708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:05:02.278731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:05:02.278755 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:05:02.278779 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 09:05:02.278804 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 09:05:02.278826 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 09:05:02.278846 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 09:05:02.278928 systemd-journald[1107]: Collecting audit messages is disabled. Jan 16 09:05:02.288936 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 09:05:02.289059 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 09:05:02.289087 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 09:05:02.289110 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 16 09:05:02.289135 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 09:05:02.289163 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 09:05:02.289198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:02.289225 systemd-journald[1107]: Journal started Jan 16 09:05:02.289285 systemd-journald[1107]: Runtime Journal (/run/log/journal/b403c3030ea8451293a4a0f491216f16) is 4.9M, max 39.3M, 34.4M free. Jan 16 09:05:02.307081 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 09:05:02.307173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 09:05:01.537344 systemd[1]: Queued start job for default target multi-user.target. Jan 16 09:05:01.587772 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 16 09:05:01.589587 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 09:05:02.318043 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 09:05:02.336150 kernel: loop: module loaded Jan 16 09:05:02.341030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:05:02.349062 kernel: fuse: init (API version 7.39) Jan 16 09:05:02.371640 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 09:05:02.425019 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 16 09:05:02.425132 kernel: ACPI: bus type drm_connector registered Jan 16 09:05:02.445768 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 09:05:02.449125 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 09:05:02.451194 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 09:05:02.452393 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 09:05:02.455329 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 09:05:02.455593 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 09:05:02.458724 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:05:02.460138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 09:05:02.461402 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 09:05:02.464086 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 09:05:02.466707 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 09:05:02.541343 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 09:05:02.558138 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 09:05:02.584071 kernel: loop0: detected capacity change from 0 to 142488 Jan 16 09:05:02.579384 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 09:05:02.595280 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 16 09:05:02.596720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 09:05:02.601386 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 09:05:02.716225 systemd-journald[1107]: Time spent on flushing to /var/log/journal/b403c3030ea8451293a4a0f491216f16 is 179.107ms for 992 entries. Jan 16 09:05:02.716225 systemd-journald[1107]: System Journal (/var/log/journal/b403c3030ea8451293a4a0f491216f16) is 8.0M, max 195.6M, 187.6M free. Jan 16 09:05:02.948631 systemd-journald[1107]: Received client request to flush runtime journal. Jan 16 09:05:02.948720 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 09:05:02.948752 kernel: loop1: detected capacity change from 0 to 140768 Jan 16 09:05:02.948779 kernel: loop2: detected capacity change from 0 to 8 Jan 16 09:05:02.722902 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:05:02.784391 systemd-tmpfiles[1133]: ACLs are not supported, ignoring. Jan 16 09:05:02.784415 systemd-tmpfiles[1133]: ACLs are not supported, ignoring. Jan 16 09:05:02.817208 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 09:05:02.820694 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 16 09:05:02.846252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:05:02.858316 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 09:05:02.896942 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:05:02.915524 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 16 09:05:02.952784 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 16 09:05:02.978549 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 16 09:05:02.999206 kernel: loop3: detected capacity change from 0 to 205544 Jan 16 09:05:03.061445 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 09:05:03.072986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 09:05:03.092619 kernel: loop4: detected capacity change from 0 to 142488 Jan 16 09:05:03.117570 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jan 16 09:05:03.117607 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jan 16 09:05:03.128919 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:05:03.151300 kernel: loop5: detected capacity change from 0 to 140768 Jan 16 09:05:03.329242 kernel: loop6: detected capacity change from 0 to 8 Jan 16 09:05:03.391294 kernel: loop7: detected capacity change from 0 to 205544 Jan 16 09:05:03.554530 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 16 09:05:03.555515 (sd-merge)[1177]: Merged extensions into '/usr'. Jan 16 09:05:03.583401 systemd[1]: Reloading requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 09:05:03.583426 systemd[1]: Reloading... Jan 16 09:05:03.914207 zram_generator::config[1205]: No configuration found. Jan 16 09:05:04.370041 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 09:05:04.422111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:05:04.519359 systemd[1]: Reloading finished in 935 ms. Jan 16 09:05:04.586912 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 09:05:04.588356 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 09:05:04.616665 systemd[1]: Starting ensure-sysext.service... Jan 16 09:05:04.644067 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 09:05:04.653265 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Jan 16 09:05:04.653290 systemd[1]: Reloading... Jan 16 09:05:04.738757 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 09:05:04.740927 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 16 09:05:04.745858 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 16 09:05:04.747866 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 16 09:05:04.750492 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 16 09:05:04.762577 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 09:05:04.764220 systemd-tmpfiles[1249]: Skipping /boot Jan 16 09:05:04.819422 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 09:05:04.821742 systemd-tmpfiles[1249]: Skipping /boot Jan 16 09:05:04.891009 zram_generator::config[1275]: No configuration found. 
Jan 16 09:05:05.215782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:05:05.327506 systemd[1]: Reloading finished in 665 ms. Jan 16 09:05:05.364439 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 09:05:05.371075 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:05:05.395490 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 09:05:05.405326 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 09:05:05.411387 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 09:05:05.422432 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 09:05:05.431594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 09:05:05.439549 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 09:05:05.452599 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:05.452966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:05.465636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:05:05.477546 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 09:05:05.489463 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:05:05.490397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:05.490615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:05.501251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:05.501611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:05.501919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:05.502111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:05.515547 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:05.516009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:05.526434 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 09:05:05.528139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:05.537351 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 09:05:05.538050 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
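The sd-merge pass above overlays the four extension images onto /usr and triggers the daemon reloads so units shipped in the merged images become visible. A small sketch of how the candidate images can be enumerated; the search directories are an assumption based on systemd-sysext's standard locations, since the log only names the resulting extensions:

```python
from pathlib import Path

# Standard systemd-sysext image locations (assumed; not stated in the log).
SYSEXT_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def list_extensions() -> list[str]:
    names = []
    for d in SYSEXT_DIRS:
        for entry in sorted(Path(d).glob("*")):
            # Both raw disk images and plain directory trees are accepted.
            if entry.suffix == ".raw" or entry.is_dir():
                names.append(entry.name.removesuffix(".raw"))
    return names

# Expected here: containerd-flatcar, docker-flatcar, kubernetes, oem-digitalocean.
```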
Jan 16 09:05:05.539243 systemd[1]: Finished ensure-sysext.service. Jan 16 09:05:05.547232 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 09:05:05.562302 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 09:05:05.566047 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 09:05:05.577343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:05:05.577868 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:05:05.591424 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 09:05:05.604177 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:05:05.609319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 09:05:05.610966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:05:05.612072 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:05:05.616021 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 09:05:05.616189 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 09:05:05.637660 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 09:05:05.638092 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 09:05:05.660117 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Jan 16 09:05:05.669632 augenrules[1355]: No rules Jan 16 09:05:05.678338 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 09:05:05.680676 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 09:05:05.735167 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 09:05:05.738228 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 09:05:05.742341 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 09:05:05.781501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:05:05.793298 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 09:05:05.878777 systemd-resolved[1325]: Positive Trust Anchors: Jan 16 09:05:05.878797 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 09:05:05.878852 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 09:05:05.892075 systemd-resolved[1325]: Using system hostname 'ci-4081.3.0-f-3b05cacdca'. Jan 16 09:05:05.895864 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 16 09:05:05.896752 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:05:05.984070 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381) Jan 16 09:05:06.001224 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 09:05:06.002230 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 09:05:06.053104 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 16 09:05:06.079231 systemd-networkd[1372]: lo: Link UP Jan 16 09:05:06.081903 systemd-networkd[1372]: lo: Gained carrier Jan 16 09:05:06.086741 systemd-networkd[1372]: Enumeration completed Jan 16 09:05:06.087135 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 09:05:06.087296 systemd-timesyncd[1344]: No network connectivity, watching for changes. Jan 16 09:05:06.090643 systemd[1]: Reached target network.target - Network. Jan 16 09:05:06.106335 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 09:05:06.152307 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 16 09:05:06.153087 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:06.153333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:05:06.162472 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:05:06.174529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 09:05:06.183196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:05:06.184365 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:05:06.184429 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 09:05:06.184452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:05:06.197052 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:05:06.197299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:05:06.221892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:05:06.224609 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:05:06.226956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 09:05:06.228087 kernel: ISO 9660 Extensions: RRIP_1991A Jan 16 09:05:06.233074 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 16 09:05:06.254534 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:05:06.255480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 09:05:06.258581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 09:05:06.272935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 16 09:05:06.283533 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 09:05:06.322689 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 09:05:06.342024 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 16 09:05:06.350070 kernel: ACPI: button: Power Button [PWRF] Jan 16 09:05:06.392021 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 16 09:05:06.405090 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 16 09:05:06.420890 systemd-networkd[1372]: eth1: Configuring with /run/systemd/network/10-da:73:d4:8f:da:65.network. Jan 16 09:05:06.423259 systemd-networkd[1372]: eth1: Link UP Jan 16 09:05:06.424306 systemd-networkd[1372]: eth1: Gained carrier Jan 16 09:05:06.430873 systemd-networkd[1372]: eth0: Configuring with /run/systemd/network/10-5a:c8:15:1b:cb:9a.network. Jan 16 09:05:06.434873 systemd-networkd[1372]: eth0: Link UP Jan 16 09:05:06.435384 systemd-networkd[1372]: eth0: Gained carrier Jan 16 09:05:06.440969 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Jan 16 09:05:06.524026 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 09:05:06.570470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:05:06.659067 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 16 09:05:06.659147 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 16 09:05:06.665008 kernel: Console: switching to colour dummy device 80x25 Jan 16 09:05:06.666485 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 09:05:06.666582 kernel: [drm] features: -context_init Jan 16 09:05:06.669108 kernel: [drm] number of scanouts: 1 Jan 16 09:05:06.669225 kernel: [drm] number of cap sets: 0 Jan 16 09:05:06.693117 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 16 09:05:06.701747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:05:06.702172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:06.707050 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 16 09:05:06.710062 kernel: Console: switching to colour frame buffer device 128x48 Jan 16 09:05:06.715462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:05:06.736068 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 09:05:06.757714 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:05:06.758259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:06.792339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:05:06.809018 kernel: EDAC MC: Ver: 3.0.0 Jan 16 09:05:06.838361 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 16 09:05:06.858209 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 16 09:05:06.887924 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:05:06.891155 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 09:05:06.943305 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
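In the real root, systemd-networkd now matches each NIC against a runtime unit named /run/systemd/network/10-<mac>.network, rather than the "potentially unpredictable interface name" fallback seen in the initrd. A sketch of generating such a MAC-pinned unit; only the filename pattern comes from the log, and the [Match]/[Network] body is an assumed minimal DHCP configuration, not the agent's exact output:

```python
from pathlib import Path

def write_network_unit(mac: str, rundir: str = "/run/systemd/network") -> Path:
    # e.g. write_network_unit("5a:c8:15:1b:cb:9a") -> 10-5a:c8:15:1b:cb:9a.network
    unit = Path(rundir) / f"10-{mac}.network"
    unit.write_text(
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        "DHCP=ipv4\n"  # assumed; matches the DHCPv4 leases seen earlier in the log
    )
    return unit
```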
Jan 16 09:05:06.945802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:05:06.947620 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 09:05:06.948416 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 09:05:06.948760 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 09:05:06.954431 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 09:05:06.958363 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 09:05:06.960219 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 09:05:06.966546 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 09:05:06.966609 systemd[1]: Reached target paths.target - Path Units. Jan 16 09:05:06.966687 systemd[1]: Reached target timers.target - Timer Units. Jan 16 09:05:06.969707 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 09:05:06.993714 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 09:05:07.007322 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 09:05:07.045667 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 09:05:07.051945 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 09:05:07.054249 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 09:05:07.057011 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 09:05:07.055325 systemd[1]: Reached target basic.target - Basic System. Jan 16 09:05:07.058048 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 09:05:07.058097 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 09:05:07.066354 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 09:05:07.082276 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 09:05:07.123940 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 09:05:07.136472 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 09:05:07.153333 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 09:05:07.156177 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 09:05:07.159500 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 09:05:07.167267 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 09:05:07.182713 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 09:05:07.192042 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 09:05:07.217135 jq[1438]: false Jan 16 09:05:07.217370 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 09:05:07.218878 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 16 09:05:07.222997 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 16 09:05:07.232496 systemd[1]: Starting update-engine.service - Update Engine...
Jan 16 09:05:07.234344 dbus-daemon[1437]: [system] SELinux support is enabled
Jan 16 09:05:07.248212 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 16 09:05:07.251401 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 16 09:05:07.266102 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 16 09:05:07.277525 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 16 09:05:07.279106 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 16 09:05:07.279868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 16 09:05:07.281244 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 16 09:05:07.306670 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 16 09:05:07.306790 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 16 09:05:07.307960 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 16 09:05:07.315264 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 16 09:05:07.315309 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 16 09:05:07.322818 jq[1450]: true
Jan 16 09:05:07.339018 coreos-metadata[1436]: Jan 16 09:05:07.336 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 09:05:07.350101 coreos-metadata[1436]: Jan 16 09:05:07.348 INFO Fetch successful
Jan 16 09:05:07.365631 update_engine[1449]: I20250116 09:05:07.364914 1449 main.cc:92] Flatcar Update Engine starting
Jan 16 09:05:07.371876 update_engine[1449]: I20250116 09:05:07.370946 1449 update_check_scheduler.cc:74] Next update check in 11m32s
Jan 16 09:05:07.367534 systemd[1]: Started update-engine.service - Update Engine.
Jan 16 09:05:07.372390 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 16 09:05:07.377449 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 16 09:05:07.380188 systemd[1]: motdgen.service: Deactivated successfully.
Jan 16 09:05:07.380535 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 16 09:05:07.413080 jq[1466]: true
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found loop4
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found loop5
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found loop6
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found loop7
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found vda
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found vda1
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found vda2
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found vda3
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found usr
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found vda4
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found vda6
Jan 16 09:05:07.420177 extend-filesystems[1441]: Found vda7
Jan 16 09:05:07.515585 extend-filesystems[1441]: Found vda9
Jan 16 09:05:07.515585 extend-filesystems[1441]: Checking size of /dev/vda9
Jan 16 09:05:07.515585 extend-filesystems[1441]: Resized partition /dev/vda9
Jan 16 09:05:07.478518 systemd-logind[1447]: New seat seat0.
Jan 16 09:05:07.537929 tar[1461]: linux-amd64/helm
Jan 16 09:05:07.540288 extend-filesystems[1484]: resize2fs 1.47.1 (20-May-2024)
Jan 16 09:05:07.488140 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 16 09:05:07.488167 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 16 09:05:07.506034 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 16 09:05:07.550567 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 16 09:05:07.560771 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 16 09:05:07.590054 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 16 09:05:07.666227 systemd-networkd[1372]: eth1: Gained IPv6LL
Jan 16 09:05:07.736026 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1384)
Jan 16 09:05:07.708912 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 16 09:05:07.794020 systemd[1]: Reached target network-online.target - Network is Online.
Jan 16 09:05:07.835431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 09:05:07.847512 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 16 09:05:07.922113 systemd-networkd[1372]: eth0: Gained IPv6LL
Jan 16 09:05:08.003020 bash[1498]: Updated "/home/core/.ssh/authorized_keys"
Jan 16 09:05:08.012219 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 16 09:05:08.045789 systemd[1]: Starting sshkeys.service...
Jan 16 09:05:08.109148 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 16 09:05:08.125931 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 16 09:05:08.198931 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 16 09:05:08.240418 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 16 09:05:08.322108 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 16 09:05:08.329369 coreos-metadata[1521]: Jan 16 09:05:08.329 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 09:05:08.349004 coreos-metadata[1521]: Jan 16 09:05:08.346 INFO Fetch successful
Jan 16 09:05:08.396919 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 16 09:05:08.406052 extend-filesystems[1484]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 16 09:05:08.406052 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 16 09:05:08.406052 extend-filesystems[1484]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 16 09:05:08.428954 extend-filesystems[1441]: Resized filesystem in /dev/vda9
Jan 16 09:05:08.428954 extend-filesystems[1441]: Found vdb
Jan 16 09:05:08.410623 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 16 09:05:08.411353 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 16 09:05:08.418845 unknown[1521]: wrote ssh authorized keys file for user: core
Jan 16 09:05:08.539188 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 16 09:05:08.562599 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 16 09:05:08.586751 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys"
Jan 16 09:05:08.590108 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 16 09:05:08.602603 systemd[1]: Finished sshkeys.service.
Jan 16 09:05:08.614234 containerd[1464]: time="2025-01-16T09:05:08.611875461Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 16 09:05:08.641963 systemd[1]: issuegen.service: Deactivated successfully.
Jan 16 09:05:08.642340 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 16 09:05:08.666494 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 16 09:05:08.682392 containerd[1464]: time="2025-01-16T09:05:08.679419880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 16 09:05:08.685363 containerd[1464]: time="2025-01-16T09:05:08.685291425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:05:08.685363 containerd[1464]: time="2025-01-16T09:05:08.685355219Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 16 09:05:08.685528 containerd[1464]: time="2025-01-16T09:05:08.685382396Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 16 09:05:08.685642 containerd[1464]: time="2025-01-16T09:05:08.685607655Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 16 09:05:08.685702 containerd[1464]: time="2025-01-16T09:05:08.685642964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 16 09:05:08.685770 containerd[1464]: time="2025-01-16T09:05:08.685723023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:05:08.685770 containerd[1464]: time="2025-01-16T09:05:08.685740848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 16 09:05:08.686098 containerd[1464]: time="2025-01-16T09:05:08.686047230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:05:08.686098 containerd[1464]: time="2025-01-16T09:05:08.686081271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 16 09:05:08.686098 containerd[1464]: time="2025-01-16T09:05:08.686100712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:05:08.686284 containerd[1464]: time="2025-01-16T09:05:08.686119389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 16 09:05:08.686284 containerd[1464]: time="2025-01-16T09:05:08.686236590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 16 09:05:08.688008 containerd[1464]: time="2025-01-16T09:05:08.686796104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 16 09:05:08.690013 containerd[1464]: time="2025-01-16T09:05:08.689155105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 09:05:08.690013 containerd[1464]: time="2025-01-16T09:05:08.689212193Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 16 09:05:08.690013 containerd[1464]: time="2025-01-16T09:05:08.689397438Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 16 09:05:08.690013 containerd[1464]: time="2025-01-16T09:05:08.689468935Z" level=info msg="metadata content store policy set" policy=shared
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.703707573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.703812054Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.703845393Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.703872478Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.703898671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.704238274Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.704616418Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.704810825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.704856106Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.704879866Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.704899965Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.704923349Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 16 09:05:08.704929 containerd[1464]: time="2025-01-16T09:05:08.704945104Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.704967927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707172109Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707207200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707233275Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707255095Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707292048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707316809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707337989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707403993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707425787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707446323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707464748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707486652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710090 containerd[1464]: time="2025-01-16T09:05:08.707509249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.707535722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.707559734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.707583575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.707607274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.707634628Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.707797811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.707825038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.707863065Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.709046426Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.709243395Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.709266437Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.709286866Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 16 09:05:08.710968 containerd[1464]: time="2025-01-16T09:05:08.709304399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.711424 containerd[1464]: time="2025-01-16T09:05:08.709327580Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 16 09:05:08.711424 containerd[1464]: time="2025-01-16T09:05:08.709344380Z" level=info msg="NRI interface is disabled by configuration."
Jan 16 09:05:08.711424 containerd[1464]: time="2025-01-16T09:05:08.709362545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 16 09:05:08.711540 containerd[1464]: time="2025-01-16T09:05:08.709850700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.709965318Z" level=info msg="Connect containerd service"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.712150666Z" level=info msg="using legacy CRI server"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.712173841Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.712404346Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.713536836Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.715313648Z" level=info msg="Start subscribing containerd event"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.715440044Z" level=info msg="Start recovering state"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.715573853Z" level=info msg="Start event monitor"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.715607079Z" level=info msg="Start snapshots syncer"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.715624188Z" level=info msg="Start cni network conf syncer for default"
Jan 16 09:05:08.722438 containerd[1464]: time="2025-01-16T09:05:08.715635267Z" level=info msg="Start streaming server"
Jan 16 09:05:08.716463 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 16 09:05:08.728716 containerd[1464]: time="2025-01-16T09:05:08.727415253Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 16 09:05:08.728716 containerd[1464]: time="2025-01-16T09:05:08.727541503Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 16 09:05:08.728716 containerd[1464]: time="2025-01-16T09:05:08.727691356Z" level=info msg="containerd successfully booted in 0.117177s"
Jan 16 09:05:08.739511 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 16 09:05:08.753590 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 16 09:05:08.759592 systemd[1]: Reached target getty.target - Login Prompts.
Jan 16 09:05:08.764493 systemd[1]: Started containerd.service - containerd container runtime.
Jan 16 09:05:09.597549 tar[1461]: linux-amd64/LICENSE
Jan 16 09:05:09.598168 tar[1461]: linux-amd64/README.md
Jan 16 09:05:09.615037 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 16 09:05:10.567473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 09:05:10.570872 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 16 09:05:10.575072 systemd[1]: Startup finished in 1.585s (kernel) + 7.417s (initrd) + 10.393s (userspace) = 19.396s.
Jan 16 09:05:10.581388 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 09:05:11.759057 kubelet[1561]: E0116 09:05:11.754421 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 09:05:11.759933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 09:05:11.760595 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 09:05:11.761685 systemd[1]: kubelet.service: Consumed 1.593s CPU time.
Jan 16 09:05:12.617277 systemd-timesyncd[1344]: Contacted time server 142.202.190.19:123 (0.flatcar.pool.ntp.org).
Jan 16 09:05:12.617387 systemd-timesyncd[1344]: Initial clock synchronization to Thu 2025-01-16 09:05:12.936611 UTC.
Jan 16 09:05:16.672162 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 16 09:05:16.681122 systemd[1]: Started sshd@0-137.184.14.123:22-139.178.68.195:59632.service - OpenSSH per-connection server daemon (139.178.68.195:59632).
Jan 16 09:05:16.899255 sshd[1574]: Accepted publickey for core from 139.178.68.195 port 59632 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:05:16.904245 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:05:16.940391 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 16 09:05:16.968681 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 16 09:05:16.974611 systemd-logind[1447]: New session 1 of user core.
Jan 16 09:05:17.007591 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 16 09:05:17.028770 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 16 09:05:17.034471 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 16 09:05:17.256432 systemd[1578]: Queued start job for default target default.target.
Jan 16 09:05:17.275364 systemd[1578]: Created slice app.slice - User Application Slice.
Jan 16 09:05:17.275473 systemd[1578]: Reached target paths.target - Paths.
Jan 16 09:05:17.275501 systemd[1578]: Reached target timers.target - Timers.
Jan 16 09:05:17.290463 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 16 09:05:17.307410 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 16 09:05:17.307623 systemd[1578]: Reached target sockets.target - Sockets.
Jan 16 09:05:17.307649 systemd[1578]: Reached target basic.target - Basic System.
Jan 16 09:05:17.307723 systemd[1578]: Reached target default.target - Main User Target.
Jan 16 09:05:17.307779 systemd[1578]: Startup finished in 256ms.
Jan 16 09:05:17.308321 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 16 09:05:17.312568 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 16 09:05:17.416219 systemd[1]: Started sshd@1-137.184.14.123:22-139.178.68.195:59644.service - OpenSSH per-connection server daemon (139.178.68.195:59644).
Jan 16 09:05:17.537036 sshd[1589]: Accepted publickey for core from 139.178.68.195 port 59644 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:05:17.539900 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:05:17.547280 systemd-logind[1447]: New session 2 of user core.
Jan 16 09:05:17.570611 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 16 09:05:17.697419 sshd[1589]: pam_unix(sshd:session): session closed for user core
Jan 16 09:05:17.720321 systemd[1]: sshd@1-137.184.14.123:22-139.178.68.195:59644.service: Deactivated successfully.
Jan 16 09:05:17.724603 systemd[1]: session-2.scope: Deactivated successfully.
Jan 16 09:05:17.731403 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit.
Jan 16 09:05:17.742645 systemd[1]: Started sshd@2-137.184.14.123:22-139.178.68.195:59654.service - OpenSSH per-connection server daemon (139.178.68.195:59654).
Jan 16 09:05:17.747598 systemd-logind[1447]: Removed session 2.
Jan 16 09:05:17.820153 sshd[1596]: Accepted publickey for core from 139.178.68.195 port 59654 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:05:17.827232 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:05:17.848103 systemd-logind[1447]: New session 3 of user core.
Jan 16 09:05:17.854424 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 16 09:05:17.951359 sshd[1596]: pam_unix(sshd:session): session closed for user core
Jan 16 09:05:17.974397 systemd[1]: sshd@2-137.184.14.123:22-139.178.68.195:59654.service: Deactivated successfully.
Jan 16 09:05:17.977967 systemd[1]: session-3.scope: Deactivated successfully.
Jan 16 09:05:17.982537 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit.
Jan 16 09:05:17.991624 systemd[1]: Started sshd@3-137.184.14.123:22-139.178.68.195:59662.service - OpenSSH per-connection server daemon (139.178.68.195:59662).
Jan 16 09:05:17.994220 systemd-logind[1447]: Removed session 3.
Jan 16 09:05:18.072649 sshd[1603]: Accepted publickey for core from 139.178.68.195 port 59662 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:05:18.071647 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:05:18.084618 systemd-logind[1447]: New session 4 of user core.
Jan 16 09:05:18.096670 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 16 09:05:18.186901 sshd[1603]: pam_unix(sshd:session): session closed for user core
Jan 16 09:05:18.209724 systemd[1]: sshd@3-137.184.14.123:22-139.178.68.195:59662.service: Deactivated successfully.
Jan 16 09:05:18.214068 systemd[1]: session-4.scope: Deactivated successfully.
Jan 16 09:05:18.218620 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit.
Jan 16 09:05:18.242224 systemd[1]: Started sshd@4-137.184.14.123:22-139.178.68.195:59664.service - OpenSSH per-connection server daemon (139.178.68.195:59664).
Jan 16 09:05:18.245254 systemd-logind[1447]: Removed session 4.
Jan 16 09:05:18.315741 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 59664 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:05:18.320384 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:05:18.341411 systemd-logind[1447]: New session 5 of user core.
Jan 16 09:05:18.358417 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 16 09:05:18.470701 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 16 09:05:18.471213 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 16 09:05:18.498604 sudo[1613]: pam_unix(sudo:session): session closed for user root
Jan 16 09:05:18.504957 sshd[1610]: pam_unix(sshd:session): session closed for user core
Jan 16 09:05:18.521776 systemd[1]: sshd@4-137.184.14.123:22-139.178.68.195:59664.service: Deactivated successfully.
Jan 16 09:05:18.524907 systemd[1]: session-5.scope: Deactivated successfully.
Jan 16 09:05:18.534494 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit.
Jan 16 09:05:18.547384 systemd[1]: Started sshd@5-137.184.14.123:22-139.178.68.195:59678.service - OpenSSH per-connection server daemon (139.178.68.195:59678).
Jan 16 09:05:18.551132 systemd-logind[1447]: Removed session 5.
Jan 16 09:05:18.624835 sshd[1618]: Accepted publickey for core from 139.178.68.195 port 59678 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:05:18.630642 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:05:18.651750 systemd-logind[1447]: New session 6 of user core.
Jan 16 09:05:18.655716 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 16 09:05:18.734599 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 16 09:05:18.735266 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 16 09:05:18.758296 sudo[1622]: pam_unix(sudo:session): session closed for user root
Jan 16 09:05:18.778832 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 16 09:05:18.779496 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 16 09:05:18.819574 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 16 09:05:18.838694 auditctl[1625]: No rules
Jan 16 09:05:18.840832 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 16 09:05:18.841245 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 16 09:05:18.852126 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 16 09:05:18.926987 augenrules[1644]: No rules
Jan 16 09:05:18.930108 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 16 09:05:18.934626 sudo[1621]: pam_unix(sudo:session): session closed for user root
Jan 16 09:05:18.947423 sshd[1618]: pam_unix(sshd:session): session closed for user core
Jan 16 09:05:18.970851 systemd[1]: sshd@5-137.184.14.123:22-139.178.68.195:59678.service: Deactivated successfully.
Jan 16 09:05:18.974119 systemd[1]: session-6.scope: Deactivated successfully.
Jan 16 09:05:18.978495 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit.
Jan 16 09:05:18.996108 systemd[1]: Started sshd@6-137.184.14.123:22-139.178.68.195:59692.service - OpenSSH per-connection server daemon (139.178.68.195:59692).
Jan 16 09:05:19.003240 systemd-logind[1447]: Removed session 6.
Jan 16 09:05:19.065998 sshd[1652]: Accepted publickey for core from 139.178.68.195 port 59692 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:05:19.069967 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:05:19.081125 systemd-logind[1447]: New session 7 of user core.
Jan 16 09:05:19.090373 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 16 09:05:19.169924 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 16 09:05:19.170517 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 16 09:05:20.139130 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 16 09:05:20.140203 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 16 09:05:20.954168 dockerd[1671]: time="2025-01-16T09:05:20.953430735Z" level=info msg="Starting up"
Jan 16 09:05:21.269664 dockerd[1671]: time="2025-01-16T09:05:21.269510269Z" level=info msg="Loading containers: start."
Jan 16 09:05:21.664851 kernel: Initializing XFRM netlink socket
Jan 16 09:05:21.855014 systemd-networkd[1372]: docker0: Link UP
Jan 16 09:05:21.900150 dockerd[1671]: time="2025-01-16T09:05:21.898805702Z" level=info msg="Loading containers: done."
Jan 16 09:05:21.936474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 16 09:05:21.945768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 09:05:21.990087 dockerd[1671]: time="2025-01-16T09:05:21.986636703Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 16 09:05:21.990087 dockerd[1671]: time="2025-01-16T09:05:21.986808963Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 16 09:05:21.990087 dockerd[1671]: time="2025-01-16T09:05:21.987036828Z" level=info msg="Daemon has completed initialization"
Jan 16 09:05:22.225185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 09:05:22.245742 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 09:05:22.425135 kubelet[1785]: E0116 09:05:22.423631 1785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 09:05:22.434338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 09:05:22.434679 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 09:05:22.453212 dockerd[1671]: time="2025-01-16T09:05:22.452883435Z" level=info msg="API listen on /run/docker.sock"
Jan 16 09:05:22.453562 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 16 09:05:23.949528 containerd[1464]: time="2025-01-16T09:05:23.948819263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 16 09:05:23.962610 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Jan 16 09:05:25.273676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303014611.mount: Deactivated successfully.
Jan 16 09:05:27.057298 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Jan 16 09:05:27.985013 containerd[1464]: time="2025-01-16T09:05:27.984910513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:27.991434 containerd[1464]: time="2025-01-16T09:05:27.991347258Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721"
Jan 16 09:05:27.997478 containerd[1464]: time="2025-01-16T09:05:27.997333425Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:28.005012 containerd[1464]: time="2025-01-16T09:05:28.004470373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:28.008382 containerd[1464]: time="2025-01-16T09:05:28.008321269Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 4.05941982s"
Jan 16 09:05:28.008615 containerd[1464]: time="2025-01-16T09:05:28.008595591Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\""
Jan 16 09:05:28.011906 containerd[1464]: time="2025-01-16T09:05:28.011543220Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 16 09:05:30.711526 containerd[1464]: time="2025-01-16T09:05:30.711427552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:30.714718 containerd[1464]: time="2025-01-16T09:05:30.714441346Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143"
Jan 16 09:05:30.716325 containerd[1464]: time="2025-01-16T09:05:30.716221949Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:30.745042 containerd[1464]: time="2025-01-16T09:05:30.743120069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:30.746051 containerd[1464]: time="2025-01-16T09:05:30.745959545Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 2.734360355s"
Jan 16 09:05:30.746297 containerd[1464]: time="2025-01-16T09:05:30.746259919Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\""
Jan 16 09:05:30.748042 containerd[1464]: time="2025-01-16T09:05:30.747968310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 16 09:05:30.751175 systemd-resolved[1325]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Jan 16 09:05:32.445920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 16 09:05:32.465579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 09:05:32.711324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 09:05:32.730651 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 09:05:32.896871 kubelet[1901]: E0116 09:05:32.893347 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 09:05:32.899108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 09:05:32.899563 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 09:05:33.061432 containerd[1464]: time="2025-01-16T09:05:33.060642565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:33.064024 containerd[1464]: time="2025-01-16T09:05:33.063803141Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053"
Jan 16 09:05:33.071788 containerd[1464]: time="2025-01-16T09:05:33.071682658Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:33.077336 containerd[1464]: time="2025-01-16T09:05:33.077202540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:33.079948 containerd[1464]: time="2025-01-16T09:05:33.079301436Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 2.331097475s"
Jan 16 09:05:33.079948 containerd[1464]: time="2025-01-16T09:05:33.079369366Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\""
Jan 16 09:05:33.080604 containerd[1464]: time="2025-01-16T09:05:33.080547923Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 16 09:05:34.932605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243715370.mount: Deactivated successfully.
Jan 16 09:05:35.927088 containerd[1464]: time="2025-01-16T09:05:35.926617903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:35.930007 containerd[1464]: time="2025-01-16T09:05:35.929900845Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128"
Jan 16 09:05:35.933022 containerd[1464]: time="2025-01-16T09:05:35.932196984Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:35.938005 containerd[1464]: time="2025-01-16T09:05:35.937907108Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.857199505s"
Jan 16 09:05:35.938348 containerd[1464]: time="2025-01-16T09:05:35.938295640Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\""
Jan 16 09:05:35.938572 containerd[1464]: time="2025-01-16T09:05:35.938243273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:35.940432 containerd[1464]: time="2025-01-16T09:05:35.940394223Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 16 09:05:36.604604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744062958.mount: Deactivated successfully.
Jan 16 09:05:38.553769 containerd[1464]: time="2025-01-16T09:05:38.552223798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:38.564265 containerd[1464]: time="2025-01-16T09:05:38.564179099Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 16 09:05:38.567215 containerd[1464]: time="2025-01-16T09:05:38.567150261Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:38.573718 containerd[1464]: time="2025-01-16T09:05:38.573648370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:38.577161 containerd[1464]: time="2025-01-16T09:05:38.577088178Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.636507061s"
Jan 16 09:05:38.577529 containerd[1464]: time="2025-01-16T09:05:38.577385404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 16 09:05:38.579081 containerd[1464]: time="2025-01-16T09:05:38.578564187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 16 09:05:39.270313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339880094.mount: Deactivated successfully.
Jan 16 09:05:39.297486 containerd[1464]: time="2025-01-16T09:05:39.297395537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:39.301498 containerd[1464]: time="2025-01-16T09:05:39.301393569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 16 09:05:39.303436 containerd[1464]: time="2025-01-16T09:05:39.303249261Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:39.311401 containerd[1464]: time="2025-01-16T09:05:39.310230544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:39.312188 containerd[1464]: time="2025-01-16T09:05:39.311802537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 733.178861ms"
Jan 16 09:05:39.312188 containerd[1464]: time="2025-01-16T09:05:39.311860114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 16 09:05:39.313893 containerd[1464]: time="2025-01-16T09:05:39.313553382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 16 09:05:40.030903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253388621.mount: Deactivated successfully.
Jan 16 09:05:42.946020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 16 09:05:42.958498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 09:05:43.292392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 09:05:43.292857 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 09:05:43.402808 kubelet[2028]: E0116 09:05:43.402659 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 09:05:43.405565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 09:05:43.405797 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 09:05:43.640344 containerd[1464]: time="2025-01-16T09:05:43.639897484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:43.644263 containerd[1464]: time="2025-01-16T09:05:43.644131956Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Jan 16 09:05:43.646465 containerd[1464]: time="2025-01-16T09:05:43.646170170Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:43.653818 containerd[1464]: time="2025-01-16T09:05:43.653746656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:05:43.656244 containerd[1464]: time="2025-01-16T09:05:43.655778894Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.342156613s"
Jan 16 09:05:43.656244 containerd[1464]: time="2025-01-16T09:05:43.655847750Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 16 09:05:48.836906 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 09:05:48.854539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 09:05:48.910564 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)...
Jan 16 09:05:48.910772 systemd[1]: Reloading...
Jan 16 09:05:49.095085 zram_generator::config[2095]: No configuration found.
Jan 16 09:05:49.319327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 09:05:49.432756 systemd[1]: Reloading finished in 521 ms.
Jan 16 09:05:49.501570 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 16 09:05:49.501925 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 16 09:05:49.502638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 09:05:49.509684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 09:05:49.706059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 09:05:49.724991 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 16 09:05:49.824021 kubelet[2153]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 16 09:05:49.824021 kubelet[2153]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 16 09:05:49.824021 kubelet[2153]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 16 09:05:49.824021 kubelet[2153]: I0116 09:05:49.823508 2153 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 16 09:05:50.747200 kubelet[2153]: I0116 09:05:50.747125 2153 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 16 09:05:50.747200 kubelet[2153]: I0116 09:05:50.747181 2153 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 16 09:05:50.747563 kubelet[2153]: I0116 09:05:50.747531 2153 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 16 09:05:50.783890 kubelet[2153]: I0116 09:05:50.783368 2153 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 16 09:05:50.784375 kubelet[2153]: E0116 09:05:50.784223 2153 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.14.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError"
Jan 16 09:05:50.794785 kubelet[2153]: E0116 09:05:50.794717 2153 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 16 09:05:50.794785 kubelet[2153]: I0116 09:05:50.794765 2153 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 16 09:05:50.807443 kubelet[2153]: I0116 09:05:50.807290 2153 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 16 09:05:50.812151 kubelet[2153]: I0116 09:05:50.810821 2153 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 16 09:05:50.812151 kubelet[2153]: I0116 09:05:50.811184 2153 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 16 09:05:50.812151 kubelet[2153]: I0116 09:05:50.811231 2153 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-f-3b05cacdca","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 16 09:05:50.812151 kubelet[2153]: I0116 09:05:50.811559 2153 topology_manager.go:138] "Creating topology manager with none policy"
Jan 16 09:05:50.812621 kubelet[2153]: I0116 09:05:50.811578 2153 container_manager_linux.go:300] "Creating device plugin manager"
Jan 16 09:05:50.812621 kubelet[2153]: I0116 09:05:50.811766 2153 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 09:05:50.818489 kubelet[2153]: I0116 09:05:50.818428 2153 kubelet.go:408] "Attempting to sync node with API server"
Jan 16 09:05:50.818760 kubelet[2153]: I0116 09:05:50.818735 2153 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 16 09:05:50.818913 kubelet[2153]: I0116 09:05:50.818897 2153 kubelet.go:314] "Adding apiserver pod source"
Jan 16 09:05:50.819048 kubelet[2153]: I0116 09:05:50.819028 2153 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 16 09:05:50.825184 kubelet[2153]: W0116 09:05:50.824163 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.14.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-3b05cacdca&limit=500&resourceVersion=0": dial tcp 137.184.14.123:6443: connect: connection refused
Jan 16 09:05:50.825184 kubelet[2153]: E0116 09:05:50.824267 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.14.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-3b05cacdca&limit=500&resourceVersion=0\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError"
Jan 16 09:05:50.826432 kubelet[2153]: W0116 09:05:50.825967 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.14.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.14.123:6443: connect: connection refused
Jan 16 09:05:50.826617 kubelet[2153]: E0116 09:05:50.826592 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.14.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError"
Jan 16 09:05:50.826843 kubelet[2153]: I0116 09:05:50.826821 2153 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 16 09:05:50.829326 kubelet[2153]: I0116 09:05:50.829284 2153 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 16 09:05:50.829571 kubelet[2153]: W0116 09:05:50.829557 2153 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 16 09:05:50.831947 kubelet[2153]: I0116 09:05:50.831900 2153 server.go:1269] "Started kubelet"
Jan 16 09:05:50.834935 kubelet[2153]: I0116 09:05:50.834798 2153 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 16 09:05:50.836524 kubelet[2153]: I0116 09:05:50.836281 2153 server.go:460] "Adding debug handlers to kubelet server"
Jan 16 09:05:50.840100 kubelet[2153]: I0116 09:05:50.840054 2153 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 16 09:05:50.840801 kubelet[2153]: I0116 09:05:50.840719 2153 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 16 09:05:50.841366 kubelet[2153]: I0116 09:05:50.841341 2153 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 16 09:05:50.847664 kubelet[2153]: E0116 09:05:50.843542 2153 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.14.123:6443/api/v1/namespaces/default/events\": dial tcp 137.184.14.123:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-f-3b05cacdca.181b20fca9ab091d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-f-3b05cacdca,UID:ci-4081.3.0-f-3b05cacdca,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-f-3b05cacdca,},FirstTimestamp:2025-01-16 09:05:50.831864093 +0000 UTC m=+1.098022176,LastTimestamp:2025-01-16 09:05:50.831864093 +0000 UTC m=+1.098022176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-f-3b05cacdca,}"
Jan 16 09:05:50.850270 kubelet[2153]: E0116 09:05:50.850229 2153 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 09:05:50.850825 kubelet[2153]: I0116 09:05:50.850798 2153 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 09:05:50.853202 kubelet[2153]: I0116 09:05:50.853079 2153 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 16 09:05:50.855019 kubelet[2153]: E0116 09:05:50.853477 2153 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-f-3b05cacdca\" not found" Jan 16 09:05:50.855459 kubelet[2153]: I0116 09:05:50.855430 2153 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 16 09:05:50.855548 kubelet[2153]: I0116 09:05:50.855532 2153 reconciler.go:26] "Reconciler: start to sync state" Jan 16 09:05:50.858645 kubelet[2153]: I0116 09:05:50.858611 2153 factory.go:221] Registration of the systemd container factory successfully Jan 16 09:05:50.859029 kubelet[2153]: I0116 09:05:50.858971 2153 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 09:05:50.860117 kubelet[2153]: W0116 09:05:50.860053 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.14.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.14.123:6443: connect: connection refused Jan 16 09:05:50.860391 kubelet[2153]: E0116 09:05:50.860353 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.14.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError" Jan 16 09:05:50.860648 kubelet[2153]: E0116 09:05:50.860594 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.14.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-3b05cacdca?timeout=10s\": dial tcp 137.184.14.123:6443: connect: connection refused" interval="200ms" Jan 16 09:05:50.863128 kubelet[2153]: I0116 09:05:50.863095 2153 factory.go:221] Registration of the containerd container factory successfully Jan 16 09:05:50.875885 kubelet[2153]: I0116 09:05:50.875805 2153 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 09:05:50.877946 kubelet[2153]: I0116 09:05:50.877882 2153 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 16 09:05:50.877946 kubelet[2153]: I0116 09:05:50.877936 2153 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 09:05:50.878160 kubelet[2153]: I0116 09:05:50.877967 2153 kubelet.go:2321] "Starting kubelet main sync loop" Jan 16 09:05:50.878160 kubelet[2153]: E0116 09:05:50.878057 2153 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 09:05:50.894235 kubelet[2153]: W0116 09:05:50.894109 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.14.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.14.123:6443: connect: connection refused Jan 16 09:05:50.894443 kubelet[2153]: E0116 09:05:50.894275 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.14.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError" Jan 16 09:05:50.901749 kubelet[2153]: I0116 09:05:50.901715 2153 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 09:05:50.902068 kubelet[2153]: I0116 09:05:50.901982 2153 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 09:05:50.902205 kubelet[2153]: I0116 09:05:50.902188 2153 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:05:50.906969 kubelet[2153]: I0116 09:05:50.906920 2153 policy_none.go:49] "None policy: Start" Jan 16 09:05:50.908736 kubelet[2153]: I0116 09:05:50.908210 2153 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 09:05:50.908736 kubelet[2153]: I0116 09:05:50.908250 2153 state_mem.go:35] "Initializing new in-memory state store" Jan 16 09:05:50.919861 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 09:05:50.938226 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 09:05:50.944146 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 16 09:05:50.954233 kubelet[2153]: E0116 09:05:50.954148 2153 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-f-3b05cacdca\" not found" Jan 16 09:05:50.958919 kubelet[2153]: I0116 09:05:50.957681 2153 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 09:05:50.958919 kubelet[2153]: I0116 09:05:50.957963 2153 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 09:05:50.958919 kubelet[2153]: I0116 09:05:50.958008 2153 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 09:05:50.958919 kubelet[2153]: I0116 09:05:50.958654 2153 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 09:05:50.963926 kubelet[2153]: E0116 09:05:50.963886 2153 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-f-3b05cacdca\" not found" Jan 16 09:05:51.006184 systemd[1]: Created slice kubepods-burstable-pod117039378b3ab0b9558402ae103415c6.slice - libcontainer container kubepods-burstable-pod117039378b3ab0b9558402ae103415c6.slice. 
Jan 16 09:05:51.031781 systemd[1]: Created slice kubepods-burstable-pod684e590264e363f5524532e48d8eea82.slice - libcontainer container kubepods-burstable-pod684e590264e363f5524532e48d8eea82.slice. Jan 16 09:05:51.043747 systemd[1]: Created slice kubepods-burstable-podc8836c836585b32530d79b50c29de566.slice - libcontainer container kubepods-burstable-podc8836c836585b32530d79b50c29de566.slice. Jan 16 09:05:51.056713 kubelet[2153]: I0116 09:05:51.056646 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.057448 kubelet[2153]: I0116 09:05:51.056972 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.057448 kubelet[2153]: I0116 09:05:51.057344 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.057448 kubelet[2153]: I0116 09:05:51.057405 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/117039378b3ab0b9558402ae103415c6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-f-3b05cacdca\" (UID: \"117039378b3ab0b9558402ae103415c6\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.058129 kubelet[2153]: I0116 09:05:51.057845 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/117039378b3ab0b9558402ae103415c6-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-f-3b05cacdca\" (UID: \"117039378b3ab0b9558402ae103415c6\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.058129 kubelet[2153]: I0116 09:05:51.057908 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/117039378b3ab0b9558402ae103415c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-f-3b05cacdca\" (UID: \"117039378b3ab0b9558402ae103415c6\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.058129 kubelet[2153]: I0116 09:05:51.057940 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.058129 kubelet[2153]: I0116 09:05:51.057996 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.058129 kubelet[2153]: I0116 09:05:51.058023 2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8836c836585b32530d79b50c29de566-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-f-3b05cacdca\" (UID: \"c8836c836585b32530d79b50c29de566\") " pod="kube-system/kube-scheduler-ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.060066 kubelet[2153]: I0116 09:05:51.059982 2153 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.060814 kubelet[2153]: E0116 09:05:51.060485 2153 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.14.123:6443/api/v1/nodes\": dial tcp 137.184.14.123:6443: connect: connection refused" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.061562 kubelet[2153]: E0116 09:05:51.061513 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.14.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-3b05cacdca?timeout=10s\": dial tcp 137.184.14.123:6443: connect: connection refused" interval="400ms" Jan 16 09:05:51.262847 kubelet[2153]: I0116 09:05:51.262522 2153 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.264749 kubelet[2153]: E0116 09:05:51.264663 2153 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.14.123:6443/api/v1/nodes\": dial tcp 137.184.14.123:6443: connect: connection refused" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.323562 kubelet[2153]: E0116 09:05:51.323503 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:51.324586 containerd[1464]: time="2025-01-16T09:05:51.324515499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-f-3b05cacdca,Uid:117039378b3ab0b9558402ae103415c6,Namespace:kube-system,Attempt:0,}" Jan 16 09:05:51.337008 kubelet[2153]: E0116 09:05:51.336897 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:51.346074 containerd[1464]: time="2025-01-16T09:05:51.346004499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-f-3b05cacdca,Uid:684e590264e363f5524532e48d8eea82,Namespace:kube-system,Attempt:0,}" Jan 16 09:05:51.350151 kubelet[2153]: E0116 09:05:51.349562 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:51.350522 containerd[1464]: time="2025-01-16T09:05:51.350428518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-f-3b05cacdca,Uid:c8836c836585b32530d79b50c29de566,Namespace:kube-system,Attempt:0,}" Jan 16 09:05:51.462535 kubelet[2153]: E0116 09:05:51.462410 2153 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://137.184.14.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-3b05cacdca?timeout=10s\": dial tcp 137.184.14.123:6443: connect: connection refused" interval="800ms" Jan 16 09:05:51.666756 kubelet[2153]: I0116 09:05:51.666589 2153 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.667838 kubelet[2153]: E0116 09:05:51.667792 2153 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.14.123:6443/api/v1/nodes\": dial tcp 137.184.14.123:6443: connect: connection refused" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:51.732997 kubelet[2153]: W0116 09:05:51.732891 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.14.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-3b05cacdca&limit=500&resourceVersion=0": dial tcp 137.184.14.123:6443: connect: connection refused Jan 16 09:05:51.733204 kubelet[2153]: E0116 09:05:51.733022 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.14.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-3b05cacdca&limit=500&resourceVersion=0\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError" Jan 16 09:05:51.735753 kubelet[2153]: W0116 09:05:51.735672 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.14.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.14.123:6443: connect: connection refused Jan 16 09:05:51.736039 kubelet[2153]: E0116 09:05:51.735762 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.14.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError" Jan 16 09:05:51.936716 kubelet[2153]: W0116 09:05:51.936472 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.14.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.14.123:6443: connect: connection refused Jan 16 09:05:51.936716 kubelet[2153]: E0116 09:05:51.936601 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.14.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError" Jan 16 09:05:51.982164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396835616.mount: Deactivated successfully. 
Jan 16 09:05:52.016918 containerd[1464]: time="2025-01-16T09:05:52.016740162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:05:52.021965 kubelet[2153]: W0116 09:05:52.021311 2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.14.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.14.123:6443: connect: connection refused Jan 16 09:05:52.021965 kubelet[2153]: E0116 09:05:52.021420 2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.14.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError" Jan 16 09:05:52.026811 containerd[1464]: time="2025-01-16T09:05:52.023797741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 16 09:05:52.029386 containerd[1464]: time="2025-01-16T09:05:52.028337610Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:05:52.031334 containerd[1464]: time="2025-01-16T09:05:52.031149854Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:05:52.032663 containerd[1464]: time="2025-01-16T09:05:52.032480283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 09:05:52.034022 containerd[1464]: time="2025-01-16T09:05:52.033695065Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 09:05:52.039426 containerd[1464]: time="2025-01-16T09:05:52.039352268Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:05:52.042928 containerd[1464]: time="2025-01-16T09:05:52.042590857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 717.946306ms" Jan 16 09:05:52.047396 containerd[1464]: time="2025-01-16T09:05:52.046270186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 695.706854ms" Jan 16 09:05:52.051604 containerd[1464]: time="2025-01-16T09:05:52.051327271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 
09:05:52.084481 containerd[1464]: time="2025-01-16T09:05:52.084392823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 738.05495ms" Jan 16 09:05:52.181326 update_engine[1449]: I20250116 09:05:52.180233 1449 update_attempter.cc:509] Updating boot flags... Jan 16 09:05:52.264112 kubelet[2153]: E0116 09:05:52.264030 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.14.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-3b05cacdca?timeout=10s\": dial tcp 137.184.14.123:6443: connect: connection refused" interval="1.6s" Jan 16 09:05:52.332505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2205) Jan 16 09:05:52.452335 containerd[1464]: time="2025-01-16T09:05:52.449535375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:05:52.473041 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2207) Jan 16 09:05:52.473224 kubelet[2153]: I0116 09:05:52.472540 2153 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:52.474370 kubelet[2153]: E0116 09:05:52.474287 2153 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.14.123:6443/api/v1/nodes\": dial tcp 137.184.14.123:6443: connect: connection refused" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:52.494299 containerd[1464]: time="2025-01-16T09:05:52.492364894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:05:52.494299 containerd[1464]: time="2025-01-16T09:05:52.492467519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:05:52.494299 containerd[1464]: time="2025-01-16T09:05:52.492494910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:52.494299 containerd[1464]: time="2025-01-16T09:05:52.492652719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:52.494928 containerd[1464]: time="2025-01-16T09:05:52.478702595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:05:52.494928 containerd[1464]: time="2025-01-16T09:05:52.488112680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:05:52.494928 containerd[1464]: time="2025-01-16T09:05:52.488172396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:52.494928 containerd[1464]: time="2025-01-16T09:05:52.488343005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:52.503372 containerd[1464]: time="2025-01-16T09:05:52.491887501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:05:52.503372 containerd[1464]: time="2025-01-16T09:05:52.502146879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:52.508911 containerd[1464]: time="2025-01-16T09:05:52.508610711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:52.576297 systemd[1]: Started cri-containerd-9a5c0c9fd1dad06b0a448e9053a700074002fddc4cacce182ac3f7612c6f3dfb.scope - libcontainer container 9a5c0c9fd1dad06b0a448e9053a700074002fddc4cacce182ac3f7612c6f3dfb. Jan 16 09:05:52.640899 systemd[1]: Started cri-containerd-6e21faba2a2f8f206718055246c91bb636615de0261a991c212d31d37b2f956b.scope - libcontainer container 6e21faba2a2f8f206718055246c91bb636615de0261a991c212d31d37b2f956b. Jan 16 09:05:52.644623 systemd[1]: Started cri-containerd-9c4a8526bb22852177efa8789d35a2d1dc1c94796de0fde25b6c94bec79028eb.scope - libcontainer container 9c4a8526bb22852177efa8789d35a2d1dc1c94796de0fde25b6c94bec79028eb. Jan 16 09:05:52.769009 containerd[1464]: time="2025-01-16T09:05:52.767791653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-f-3b05cacdca,Uid:117039378b3ab0b9558402ae103415c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a5c0c9fd1dad06b0a448e9053a700074002fddc4cacce182ac3f7612c6f3dfb\"" Jan 16 09:05:52.770436 containerd[1464]: time="2025-01-16T09:05:52.769656025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-f-3b05cacdca,Uid:684e590264e363f5524532e48d8eea82,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c4a8526bb22852177efa8789d35a2d1dc1c94796de0fde25b6c94bec79028eb\"" Jan 16 09:05:52.775016 kubelet[2153]: E0116 09:05:52.774555 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:52.775423 kubelet[2153]: E0116 09:05:52.775214 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:52.781196 containerd[1464]: time="2025-01-16T09:05:52.781134718Z" level=info msg="CreateContainer within sandbox \"9a5c0c9fd1dad06b0a448e9053a700074002fddc4cacce182ac3f7612c6f3dfb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 09:05:52.784295 containerd[1464]: time="2025-01-16T09:05:52.783728095Z" level=info msg="CreateContainer within sandbox \"9c4a8526bb22852177efa8789d35a2d1dc1c94796de0fde25b6c94bec79028eb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 09:05:52.815500 containerd[1464]: time="2025-01-16T09:05:52.815312253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-f-3b05cacdca,Uid:c8836c836585b32530d79b50c29de566,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e21faba2a2f8f206718055246c91bb636615de0261a991c212d31d37b2f956b\"" Jan 16 09:05:52.817508 kubelet[2153]: E0116 09:05:52.817327 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:52.820465 containerd[1464]: time="2025-01-16T09:05:52.820423511Z" level=info msg="CreateContainer within sandbox \"6e21faba2a2f8f206718055246c91bb636615de0261a991c212d31d37b2f956b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 09:05:52.887340 containerd[1464]: time="2025-01-16T09:05:52.885396293Z" level=info msg="CreateContainer within sandbox \"9a5c0c9fd1dad06b0a448e9053a700074002fddc4cacce182ac3f7612c6f3dfb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"826dc23c288f087358f921b926ce00ae7e1afbbc7d0107fe2f3faf922954985b\"" Jan 16 09:05:52.888676 containerd[1464]: time="2025-01-16T09:05:52.888352872Z" level=info msg="StartContainer for \"826dc23c288f087358f921b926ce00ae7e1afbbc7d0107fe2f3faf922954985b\"" Jan 16 09:05:52.909657 containerd[1464]: time="2025-01-16T09:05:52.909568222Z" level=info msg="CreateContainer within sandbox \"6e21faba2a2f8f206718055246c91bb636615de0261a991c212d31d37b2f956b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e73f7cc2dd80340abbaf5b7c3aa373784defd7aa6b4517c2fd6343980d936710\"" Jan 16 09:05:52.911198 containerd[1464]: time="2025-01-16T09:05:52.911094983Z" level=info msg="StartContainer for \"e73f7cc2dd80340abbaf5b7c3aa373784defd7aa6b4517c2fd6343980d936710\"" Jan 16 09:05:52.920630 containerd[1464]: time="2025-01-16T09:05:52.920560922Z" level=info msg="CreateContainer within sandbox \"9c4a8526bb22852177efa8789d35a2d1dc1c94796de0fde25b6c94bec79028eb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"268ec501f89be9c26f096f6c2fbfe815b3a11c0c58169d00ef430f9633226b25\"" Jan 16 09:05:52.921415 containerd[1464]: time="2025-01-16T09:05:52.921359381Z" level=info msg="StartContainer for \"268ec501f89be9c26f096f6c2fbfe815b3a11c0c58169d00ef430f9633226b25\"" Jan 16 09:05:52.969371 kubelet[2153]: E0116 09:05:52.969313 2153 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.14.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.14.123:6443: connect: connection refused" logger="UnhandledError" Jan 16 09:05:53.041262 systemd[1]: Started cri-containerd-826dc23c288f087358f921b926ce00ae7e1afbbc7d0107fe2f3faf922954985b.scope - libcontainer container 826dc23c288f087358f921b926ce00ae7e1afbbc7d0107fe2f3faf922954985b. Jan 16 09:05:53.059266 systemd[1]: Started cri-containerd-268ec501f89be9c26f096f6c2fbfe815b3a11c0c58169d00ef430f9633226b25.scope - libcontainer container 268ec501f89be9c26f096f6c2fbfe815b3a11c0c58169d00ef430f9633226b25. Jan 16 09:05:53.078309 systemd[1]: Started cri-containerd-e73f7cc2dd80340abbaf5b7c3aa373784defd7aa6b4517c2fd6343980d936710.scope - libcontainer container e73f7cc2dd80340abbaf5b7c3aa373784defd7aa6b4517c2fd6343980d936710. 
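The sandbox and container activity around here is the CRI call sequence kubelet drives through containerd: RunPodSandbox returns a sandbox id, CreateContainer creates a container inside it, and StartContainer launches the result (the "StartContainer ... returns successfully" entries follow just below). A bare-bones sketch against the CRI v1 API; the socket path and the mostly-empty configs are placeholders, not values from this host:

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI runtime socket (path assumed).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: create the pause sandbox, get its id back.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{ /* metadata, namespaces... elided */ },
	})
	if err != nil {
		panic(err)
	}
	// 2. CreateContainer within that sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config:       &runtimeapi.ContainerConfig{ /* image, command... elided */ },
	})
	if err != nil {
		panic(err)
	}
	// 3. StartContainer on the returned container id.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: ctr.ContainerId,
	}); err != nil {
		panic(err)
	}
}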
Jan 16 09:05:53.178359 containerd[1464]: time="2025-01-16T09:05:53.178010240Z" level=info msg="StartContainer for \"826dc23c288f087358f921b926ce00ae7e1afbbc7d0107fe2f3faf922954985b\" returns successfully" Jan 16 09:05:53.217148 containerd[1464]: time="2025-01-16T09:05:53.215685672Z" level=info msg="StartContainer for \"268ec501f89be9c26f096f6c2fbfe815b3a11c0c58169d00ef430f9633226b25\" returns successfully" Jan 16 09:05:53.239182 containerd[1464]: time="2025-01-16T09:05:53.238962390Z" level=info msg="StartContainer for \"e73f7cc2dd80340abbaf5b7c3aa373784defd7aa6b4517c2fd6343980d936710\" returns successfully" Jan 16 09:05:53.927394 kubelet[2153]: E0116 09:05:53.927342 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:53.934933 kubelet[2153]: E0116 09:05:53.934886 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:53.939085 kubelet[2153]: E0116 09:05:53.937166 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:54.075861 kubelet[2153]: I0116 09:05:54.075817 2153 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:54.940743 kubelet[2153]: E0116 09:05:54.940694 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:54.941318 kubelet[2153]: E0116 09:05:54.941288 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:54.945889 kubelet[2153]: E0116 09:05:54.942487 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:55.930823 kubelet[2153]: E0116 09:05:55.930766 2153 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-f-3b05cacdca\" not found" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:55.944784 kubelet[2153]: E0116 09:05:55.944705 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:55.945351 kubelet[2153]: E0116 09:05:55.945310 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:56.176989 kubelet[2153]: I0116 09:05:56.176917 2153 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-f-3b05cacdca" Jan 16 09:05:56.176989 kubelet[2153]: E0116 09:05:56.176989 2153 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.0-f-3b05cacdca\": node \"ci-4081.3.0-f-3b05cacdca\" not found" Jan 16 09:05:56.839368 kubelet[2153]: I0116 09:05:56.839065 2153 apiserver.go:52] "Watching apiserver" Jan 16 09:05:56.862638 kubelet[2153]: I0116 09:05:56.859133 2153 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 16 09:05:59.244500 systemd[1]: Reloading requested from client PID 2444 ('systemctl') (unit session-7.scope)... Jan 16 09:05:59.244533 systemd[1]: Reloading... Jan 16 09:05:59.455283 zram_generator::config[2487]: No configuration found. Jan 16 09:05:59.709510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:05:59.881150 systemd[1]: Reloading finished in 635 ms. Jan 16 09:05:59.953192 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:05:59.975728 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 09:05:59.976326 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:05:59.976437 systemd[1]: kubelet.service: Consumed 1.645s CPU time, 111.0M memory peak, 0B memory swap peak. Jan 16 09:05:59.987363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:06:00.512464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:06:00.525677 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 09:06:00.826403 kubelet[2534]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:06:00.826403 kubelet[2534]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 09:06:00.830958 kubelet[2534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:06:00.830958 kubelet[2534]: I0116 09:06:00.828629 2534 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 09:06:00.875491 kubelet[2534]: I0116 09:06:00.875149 2534 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 16 09:06:00.875491 kubelet[2534]: I0116 09:06:00.875200 2534 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 09:06:00.876994 kubelet[2534]: I0116 09:06:00.876933 2534 server.go:929] "Client rotation is on, will bootstrap in background" Jan 16 09:06:00.884467 kubelet[2534]: I0116 09:06:00.884335 2534 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 09:06:00.895692 kubelet[2534]: I0116 09:06:00.895626 2534 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 09:06:00.913522 kubelet[2534]: E0116 09:06:00.913439 2534 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 09:06:00.913522 kubelet[2534]: I0116 09:06:00.913521 2534 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jan 16 09:06:00.925078 kubelet[2534]: I0116 09:06:00.924370 2534 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 09:06:00.925078 kubelet[2534]: I0116 09:06:00.924572 2534 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 16 09:06:00.925078 kubelet[2534]: I0116 09:06:00.924793 2534 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 09:06:00.925653 kubelet[2534]: I0116 09:06:00.924843 2534 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-f-3b05cacdca","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 09:06:00.926156 kubelet[2534]: I0116 09:06:00.926121 2534 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 09:06:00.926306 kubelet[2534]: I0116 09:06:00.926288 2534 container_manager_linux.go:300] "Creating device plugin manager" Jan 16 09:06:00.926460 kubelet[2534]: I0116 09:06:00.926445 2534 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:06:00.926803 kubelet[2534]: I0116 09:06:00.926773 2534 kubelet.go:408] "Attempting to sync node with API server" Jan 16 09:06:00.926939 kubelet[2534]: I0116 09:06:00.926922 2534 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 09:06:00.927915 kubelet[2534]: I0116 09:06:00.927114 2534 kubelet.go:314] "Adding apiserver pod source" Jan 16 09:06:00.927915 kubelet[2534]: I0116 09:06:00.927149 2534 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 09:06:00.930481 kubelet[2534]: I0116 09:06:00.930393 2534 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 09:06:00.932628 kubelet[2534]: I0116 09:06:00.932188 2534 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 09:06:00.933870 kubelet[2534]: I0116 
09:06:00.933818 2534 server.go:1269] "Started kubelet" Jan 16 09:06:00.947135 kubelet[2534]: I0116 09:06:00.945681 2534 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 09:06:00.974781 kubelet[2534]: I0116 09:06:00.971214 2534 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 09:06:00.980763 kubelet[2534]: I0116 09:06:00.979357 2534 server.go:460] "Adding debug handlers to kubelet server" Jan 16 09:06:00.988127 kubelet[2534]: I0116 09:06:00.987588 2534 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 09:06:00.990587 kubelet[2534]: I0116 09:06:00.989608 2534 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 09:06:00.996773 kubelet[2534]: I0116 09:06:00.993153 2534 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 09:06:00.998861 kubelet[2534]: I0116 09:06:00.998802 2534 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 16 09:06:01.000032 kubelet[2534]: E0116 09:06:00.999958 2534 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-f-3b05cacdca\" not found" Jan 16 09:06:01.013675 kubelet[2534]: I0116 09:06:01.013587 2534 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 09:06:01.022017 kubelet[2534]: I0116 09:06:01.021848 2534 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 09:06:01.022017 kubelet[2534]: I0116 09:06:01.021901 2534 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 09:06:01.022017 kubelet[2534]: I0116 09:06:01.021936 2534 kubelet.go:2321] "Starting kubelet main sync loop" Jan 16 09:06:01.033693 kubelet[2534]: E0116 09:06:01.022431 2534 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 09:06:01.035595 kubelet[2534]: I0116 09:06:01.015368 2534 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 16 09:06:01.037339 kubelet[2534]: I0116 09:06:01.037175 2534 factory.go:221] Registration of the systemd container factory successfully Jan 16 09:06:01.038458 kubelet[2534]: I0116 09:06:01.037428 2534 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 09:06:01.049889 kubelet[2534]: I0116 09:06:01.015705 2534 reconciler.go:26] "Reconciler: start to sync state" Jan 16 09:06:01.085888 kubelet[2534]: E0116 09:06:01.085732 2534 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 16 09:06:01.090431 kubelet[2534]: I0116 09:06:01.090358 2534 factory.go:221] Registration of the containerd container factory successfully
Jan 16 09:06:01.126139 kubelet[2534]: E0116 09:06:01.125202 2534 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 16 09:06:01.222599 kubelet[2534]: I0116 09:06:01.222560 2534 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 16 09:06:01.222599 kubelet[2534]: I0116 09:06:01.222585 2534 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 16 09:06:01.222929 kubelet[2534]: I0116 09:06:01.222625 2534 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 09:06:01.224604 kubelet[2534]: I0116 09:06:01.223238 2534 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 16 09:06:01.224604 kubelet[2534]: I0116 09:06:01.223270 2534 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 16 09:06:01.224604 kubelet[2534]: I0116 09:06:01.223299 2534 policy_none.go:49] "None policy: Start"
Jan 16 09:06:01.227042 kubelet[2534]: I0116 09:06:01.226298 2534 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 16 09:06:01.227042 kubelet[2534]: I0116 09:06:01.226343 2534 state_mem.go:35] "Initializing new in-memory state store"
Jan 16 09:06:01.228079 kubelet[2534]: I0116 09:06:01.227814 2534 state_mem.go:75] "Updated machine memory state"
Jan 16 09:06:01.279384 kubelet[2534]: I0116 09:06:01.279125 2534 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 16 09:06:01.280918 kubelet[2534]: I0116 09:06:01.280561 2534 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 16 09:06:01.281572 kubelet[2534]: I0116 09:06:01.280648 2534 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 16 09:06:01.281572 kubelet[2534]: I0116 09:06:01.281414 2534 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 16 09:06:01.348793 kubelet[2534]: I0116 09:06:01.348187 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.348793 kubelet[2534]: I0116 09:06:01.348253 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.348793 kubelet[2534]: I0116 09:06:01.348286 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.348793 kubelet[2534]: I0116 09:06:01.348379 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8836c836585b32530d79b50c29de566-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-f-3b05cacdca\" (UID: \"c8836c836585b32530d79b50c29de566\") " pod="kube-system/kube-scheduler-ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.348793 kubelet[2534]: I0116 09:06:01.348410 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/117039378b3ab0b9558402ae103415c6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-f-3b05cacdca\" (UID: \"117039378b3ab0b9558402ae103415c6\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.349547 kubelet[2534]: I0116 09:06:01.348441 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/117039378b3ab0b9558402ae103415c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-f-3b05cacdca\" (UID: \"117039378b3ab0b9558402ae103415c6\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.349547 kubelet[2534]: I0116 09:06:01.348467 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.349547 kubelet[2534]: I0116 09:06:01.348491 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/684e590264e363f5524532e48d8eea82-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-3b05cacdca\" (UID: \"684e590264e363f5524532e48d8eea82\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.349547 kubelet[2534]: I0116 09:06:01.348515 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/117039378b3ab0b9558402ae103415c6-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-f-3b05cacdca\" (UID: \"117039378b3ab0b9558402ae103415c6\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.381425 kubelet[2534]: W0116 09:06:01.381180 2534 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 16 09:06:01.392566 kubelet[2534]: W0116 09:06:01.391943 2534 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 16 09:06:01.393420 kubelet[2534]: W0116 09:06:01.393110 2534 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 16 09:06:01.399451 kubelet[2534]: I0116 09:06:01.399241 2534 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.460071 kubelet[2534]: I0116 09:06:01.459407 2534 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.460071 kubelet[2534]: I0116 09:06:01.459541 2534 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:01.690210 kubelet[2534]: E0116 09:06:01.690026 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:01.696033 kubelet[2534]: E0116 09:06:01.694610 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:01.696033 kubelet[2534]: E0116 09:06:01.694821 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:01.929308 kubelet[2534]: I0116 09:06:01.929235 2534 apiserver.go:52] "Watching apiserver"
Jan 16 09:06:01.939202 kubelet[2534]: I0116 09:06:01.936076 2534 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 16 09:06:02.171907 kubelet[2534]: E0116 09:06:02.170858 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:02.171907 kubelet[2534]: E0116 09:06:02.171913 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:02.174009 kubelet[2534]: E0116 09:06:02.173602 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:02.345442 kubelet[2534]: I0116 09:06:02.344788 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-f-3b05cacdca" podStartSLOduration=1.344758535 podStartE2EDuration="1.344758535s" podCreationTimestamp="2025-01-16 09:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:02.303509506 +0000 UTC m=+1.729174785" watchObservedRunningTime="2025-01-16 09:06:02.344758535 +0000 UTC m=+1.770423797"
Jan 16 09:06:02.345442 kubelet[2534]: I0116 09:06:02.345240 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-f-3b05cacdca" podStartSLOduration=1.3452273909999999 podStartE2EDuration="1.345227391s" podCreationTimestamp="2025-01-16 09:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:02.34046297 +0000 UTC m=+1.766128238" watchObservedRunningTime="2025-01-16 09:06:02.345227391 +0000 UTC m=+1.770892657"
Jan 16 09:06:02.379813 kubelet[2534]: I0116 09:06:02.379675 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-f-3b05cacdca" podStartSLOduration=1.379645676 podStartE2EDuration="1.379645676s" podCreationTimestamp="2025-01-16 09:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:02.379454467 +0000 UTC m=+1.805119730" watchObservedRunningTime="2025-01-16 09:06:02.379645676 +0000 UTC m=+1.805310942"
Jan 16 09:06:03.174545 kubelet[2534]: E0116 09:06:03.174490 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:04.175019 kubelet[2534]: I0116 09:06:04.174919 2534 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 16 09:06:04.176203 containerd[1464]: time="2025-01-16T09:06:04.175490229Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 16 09:06:04.181275 kubelet[2534]: I0116 09:06:04.178008 2534 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 16 09:06:04.637664 systemd[1]: Created slice kubepods-besteffort-poda808a3dc_3b6c_4319_8e78_a47f001b39a2.slice - libcontainer container kubepods-besteffort-poda808a3dc_3b6c_4319_8e78_a47f001b39a2.slice.
Jan 16 09:06:04.654784 kubelet[2534]: W0116 09:06:04.654622 2534 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-f-3b05cacdca" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-f-3b05cacdca' and this object
Jan 16 09:06:04.654784 kubelet[2534]: E0116 09:06:04.654702 2534 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.0-f-3b05cacdca\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.0-f-3b05cacdca' and this object" logger="UnhandledError"
Jan 16 09:06:04.654784 kubelet[2534]: W0116 09:06:04.654620 2534 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.0-f-3b05cacdca" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-f-3b05cacdca' and this object
Jan 16 09:06:04.654784 kubelet[2534]: E0116 09:06:04.654757 2534 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081.3.0-f-3b05cacdca\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.0-f-3b05cacdca' and this object" logger="UnhandledError"
Jan 16 09:06:04.738415 kubelet[2534]: I0116 09:06:04.738191 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a808a3dc-3b6c-4319-8e78-a47f001b39a2-xtables-lock\") pod \"kube-proxy-kdttd\" (UID: \"a808a3dc-3b6c-4319-8e78-a47f001b39a2\") " pod="kube-system/kube-proxy-kdttd"
Jan 16 09:06:04.738415 kubelet[2534]: I0116 09:06:04.738259 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5kb6\" (UniqueName: \"kubernetes.io/projected/a808a3dc-3b6c-4319-8e78-a47f001b39a2-kube-api-access-k5kb6\") pod \"kube-proxy-kdttd\" (UID: \"a808a3dc-3b6c-4319-8e78-a47f001b39a2\") " pod="kube-system/kube-proxy-kdttd"
Jan 16 09:06:04.738415 kubelet[2534]: I0116 09:06:04.738292 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a808a3dc-3b6c-4319-8e78-a47f001b39a2-kube-proxy\") pod \"kube-proxy-kdttd\" (UID: \"a808a3dc-3b6c-4319-8e78-a47f001b39a2\") " pod="kube-system/kube-proxy-kdttd"
Jan 16 09:06:04.738415 kubelet[2534]: I0116 09:06:04.738321 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a808a3dc-3b6c-4319-8e78-a47f001b39a2-lib-modules\") pod \"kube-proxy-kdttd\" (UID: \"a808a3dc-3b6c-4319-8e78-a47f001b39a2\") " pod="kube-system/kube-proxy-kdttd"
Jan 16 09:06:04.801198 kubelet[2534]: E0116 09:06:04.800778 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:05.186025 kubelet[2534]: E0116 09:06:05.184786 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:05.853310 kubelet[2534]: E0116 09:06:05.853072 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:05.854347 containerd[1464]: time="2025-01-16T09:06:05.854182915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdttd,Uid:a808a3dc-3b6c-4319-8e78-a47f001b39a2,Namespace:kube-system,Attempt:0,}"
Jan 16 09:06:05.923628 containerd[1464]: time="2025-01-16T09:06:05.923132091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 09:06:05.923628 containerd[1464]: time="2025-01-16T09:06:05.923227485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 09:06:05.923628 containerd[1464]: time="2025-01-16T09:06:05.923249235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:05.923628 containerd[1464]: time="2025-01-16T09:06:05.923496123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:05.984301 systemd[1]: Started cri-containerd-0a1174c9a0118d4f14d947c522f1e447a938e6332ae8d40f9268da8f7d29b006.scope - libcontainer container 0a1174c9a0118d4f14d947c522f1e447a938e6332ae8d40f9268da8f7d29b006.
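
A note on the repeated dns.go:153 errors above: the classic libc resolver honors at most three nameserver entries, and the kubelet applies the same cap when it builds resolv.conf for pods, logging this error and dropping the extras when the node's /etc/resolv.conf lists more. The following sketch is illustrative only (it is not kubelet's code; kubelet additionally clamps the list rather than just reporting it):

// flood-check.go: report when /etc/resolv.conf has more nameservers
// than the three the resolver will actually use.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // libc resolver limit that kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the shape of the kubelet message: only the first
		// three entries become the "applied nameserver line".
		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}

The "applied nameserver line" in the log itself shows 67.207.67.3 listed twice, which suggests the host resolv.conf on this droplet contains duplicate entries in addition to exceeding the limit.
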
Jan 16 09:06:06.176041 containerd[1464]: time="2025-01-16T09:06:06.172961612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdttd,Uid:a808a3dc-3b6c-4319-8e78-a47f001b39a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a1174c9a0118d4f14d947c522f1e447a938e6332ae8d40f9268da8f7d29b006\""
Jan 16 09:06:06.176303 kubelet[2534]: E0116 09:06:06.174665 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:06.182939 containerd[1464]: time="2025-01-16T09:06:06.182448268Z" level=info msg="CreateContainer within sandbox \"0a1174c9a0118d4f14d947c522f1e447a938e6332ae8d40f9268da8f7d29b006\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 16 09:06:06.258606 containerd[1464]: time="2025-01-16T09:06:06.258374257Z" level=info msg="CreateContainer within sandbox \"0a1174c9a0118d4f14d947c522f1e447a938e6332ae8d40f9268da8f7d29b006\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3dca6cccba4164f1dcea03c6f5dd2692ec8eeff285c700281e0a16fcad1cc63\""
Jan 16 09:06:06.260681 containerd[1464]: time="2025-01-16T09:06:06.260096985Z" level=info msg="StartContainer for \"c3dca6cccba4164f1dcea03c6f5dd2692ec8eeff285c700281e0a16fcad1cc63\""
Jan 16 09:06:06.398381 systemd[1]: Started cri-containerd-c3dca6cccba4164f1dcea03c6f5dd2692ec8eeff285c700281e0a16fcad1cc63.scope - libcontainer container c3dca6cccba4164f1dcea03c6f5dd2692ec8eeff285c700281e0a16fcad1cc63.
Jan 16 09:06:06.751658 systemd[1]: Created slice kubepods-besteffort-pode977c749_f89c_48a8_b416_ad2018fa9aee.slice - libcontainer container kubepods-besteffort-pode977c749_f89c_48a8_b416_ad2018fa9aee.slice.
Jan 16 09:06:06.777585 kubelet[2534]: I0116 09:06:06.776871 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e977c749-f89c-48a8-b416-ad2018fa9aee-var-lib-calico\") pod \"tigera-operator-76c4976dd7-6466f\" (UID: \"e977c749-f89c-48a8-b416-ad2018fa9aee\") " pod="tigera-operator/tigera-operator-76c4976dd7-6466f"
Jan 16 09:06:06.777585 kubelet[2534]: I0116 09:06:06.776943 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbnz\" (UniqueName: \"kubernetes.io/projected/e977c749-f89c-48a8-b416-ad2018fa9aee-kube-api-access-8rbnz\") pod \"tigera-operator-76c4976dd7-6466f\" (UID: \"e977c749-f89c-48a8-b416-ad2018fa9aee\") " pod="tigera-operator/tigera-operator-76c4976dd7-6466f"
Jan 16 09:06:06.808830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917785548.mount: Deactivated successfully.
Jan 16 09:06:06.905854 containerd[1464]: time="2025-01-16T09:06:06.905742967Z" level=info msg="StartContainer for \"c3dca6cccba4164f1dcea03c6f5dd2692ec8eeff285c700281e0a16fcad1cc63\" returns successfully"
Jan 16 09:06:07.062430 containerd[1464]: time="2025-01-16T09:06:07.060185008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-6466f,Uid:e977c749-f89c-48a8-b416-ad2018fa9aee,Namespace:tigera-operator,Attempt:0,}"
Jan 16 09:06:07.202701 containerd[1464]: time="2025-01-16T09:06:07.202375920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 09:06:07.202701 containerd[1464]: time="2025-01-16T09:06:07.202481494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 09:06:07.202701 containerd[1464]: time="2025-01-16T09:06:07.202498974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:07.203691 containerd[1464]: time="2025-01-16T09:06:07.202639614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:07.261564 kubelet[2534]: E0116 09:06:07.258164 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:07.290335 systemd[1]: Started cri-containerd-b94c216f36dd20b3874972bb972005eee08d5df31136e25b6883f82d077a7e17.scope - libcontainer container b94c216f36dd20b3874972bb972005eee08d5df31136e25b6883f82d077a7e17.
Jan 16 09:06:07.392106 containerd[1464]: time="2025-01-16T09:06:07.391118681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-6466f,Uid:e977c749-f89c-48a8-b416-ad2018fa9aee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b94c216f36dd20b3874972bb972005eee08d5df31136e25b6883f82d077a7e17\""
Jan 16 09:06:07.398032 containerd[1464]: time="2025-01-16T09:06:07.397357057Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 16 09:06:07.804685 systemd[1]: run-containerd-runc-k8s.io-b94c216f36dd20b3874972bb972005eee08d5df31136e25b6883f82d077a7e17-runc.mDMtyx.mount: Deactivated successfully.
Jan 16 09:06:08.278009 kubelet[2534]: E0116 09:06:08.276270 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:08.287872 kubelet[2534]: E0116 09:06:08.287739 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:08.343063 kubelet[2534]: I0116 09:06:08.341923 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kdttd" podStartSLOduration=4.341896844 podStartE2EDuration="4.341896844s" podCreationTimestamp="2025-01-16 09:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:07.295381944 +0000 UTC m=+6.721047229" watchObservedRunningTime="2025-01-16 09:06:08.341896844 +0000 UTC m=+7.767562221"
Jan 16 09:06:08.863621 kubelet[2534]: E0116 09:06:08.859443 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:08.951077 sudo[1655]: pam_unix(sudo:session): session closed for user root
Jan 16 09:06:08.958361 sshd[1652]: pam_unix(sshd:session): session closed for user core
Jan 16 09:06:08.968232 systemd[1]: sshd@6-137.184.14.123:22-139.178.68.195:59692.service: Deactivated successfully.
Jan 16 09:06:08.975508 systemd[1]: session-7.scope: Deactivated successfully.
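
The RunPodSandbox / CreateContainer / StartContainer messages above are containerd's side of the CRI gRPC API that the kubelet drives. A minimal sketch of that call sequence, assuming containerd's default socket path, with metadata copied from the kube-proxy-kdttd entries, the image reference hypothetical, and error handling elided (this is not kubelet's actual code path):

// cri-flow.go: the three CRI calls that produce the log lines above.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name:      "kube-proxy-kdttd",
			Uid:       "a808a3dc-3b6c-4319-8e78-a47f001b39a2",
			Namespace: "kube-system",
		},
	}
	// "RunPodSandbox ... returns sandbox id"
	sb, _ := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
	// "CreateContainer within sandbox ... returns container id"
	cc, _ := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtime.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"}, // hypothetical tag
		},
		SandboxConfig: sandboxCfg,
	})
	// "StartContainer for ... returns successfully"
	_, _ = rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: cc.ContainerId})
}

The interleaved systemd "Started cri-containerd-<id>.scope" lines show the runc shim placing each container into its own cgroup scope as these calls complete.
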
Jan 16 09:06:08.976677 systemd[1]: session-7.scope: Consumed 8.400s CPU time, 152.9M memory peak, 0B memory swap peak.
Jan 16 09:06:08.987462 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit.
Jan 16 09:06:08.991144 systemd-logind[1447]: Removed session 7.
Jan 16 09:06:09.283807 kubelet[2534]: E0116 09:06:09.280394 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:09.287540 kubelet[2534]: E0116 09:06:09.287366 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:10.283106 kubelet[2534]: E0116 09:06:10.283046 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:13.287715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount54090668.mount: Deactivated successfully.
Jan 16 09:06:15.386175 containerd[1464]: time="2025-01-16T09:06:15.386045911Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:06:15.424763 containerd[1464]: time="2025-01-16T09:06:15.405318315Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764333"
Jan 16 09:06:15.426792 containerd[1464]: time="2025-01-16T09:06:15.426698827Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:06:15.467350 containerd[1464]: time="2025-01-16T09:06:15.465424056Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:06:15.467862 containerd[1464]: time="2025-01-16T09:06:15.467057283Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 8.069622131s"
Jan 16 09:06:15.467862 containerd[1464]: time="2025-01-16T09:06:15.467647210Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 16 09:06:15.477481 containerd[1464]: time="2025-01-16T09:06:15.477427159Z" level=info msg="CreateContainer within sandbox \"b94c216f36dd20b3874972bb972005eee08d5df31136e25b6883f82d077a7e17\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 16 09:06:15.537412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355164569.mount: Deactivated successfully.
Jan 16 09:06:15.538779 containerd[1464]: time="2025-01-16T09:06:15.537752485Z" level=info msg="CreateContainer within sandbox \"b94c216f36dd20b3874972bb972005eee08d5df31136e25b6883f82d077a7e17\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4dcc2d5e74aa8ece49bf14d20d67077e465caf0aa89c17886df55f88c78e4009\""
Jan 16 09:06:15.543010 containerd[1464]: time="2025-01-16T09:06:15.542211337Z" level=info msg="StartContainer for \"4dcc2d5e74aa8ece49bf14d20d67077e465caf0aa89c17886df55f88c78e4009\""
Jan 16 09:06:15.691502 systemd[1]: Started cri-containerd-4dcc2d5e74aa8ece49bf14d20d67077e465caf0aa89c17886df55f88c78e4009.scope - libcontainer container 4dcc2d5e74aa8ece49bf14d20d67077e465caf0aa89c17886df55f88c78e4009.
Jan 16 09:06:15.789593 containerd[1464]: time="2025-01-16T09:06:15.789401493Z" level=info msg="StartContainer for \"4dcc2d5e74aa8ece49bf14d20d67077e465caf0aa89c17886df55f88c78e4009\" returns successfully"
Jan 16 09:06:19.885019 kubelet[2534]: I0116 09:06:19.884389 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-6466f" podStartSLOduration=5.809664899 podStartE2EDuration="13.884356498s" podCreationTimestamp="2025-01-16 09:06:06 +0000 UTC" firstStartedPulling="2025-01-16 09:06:07.395764352 +0000 UTC m=+6.821429614" lastFinishedPulling="2025-01-16 09:06:15.47045596 +0000 UTC m=+14.896121213" observedRunningTime="2025-01-16 09:06:16.335774523 +0000 UTC m=+15.761439789" watchObservedRunningTime="2025-01-16 09:06:19.884356498 +0000 UTC m=+19.310021766"
Jan 16 09:06:19.909287 systemd[1]: Created slice kubepods-besteffort-pod50c5be45_f4b0_4f89_8053_18dee5079d95.slice - libcontainer container kubepods-besteffort-pod50c5be45_f4b0_4f89_8053_18dee5079d95.slice.
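
The tigera-operator latency entry above shows the relationship between the two reported durations: podStartSLOduration is the end-to-end duration minus the time spent pulling the image, with the pull measured on the monotonic clock (the m=+ offsets). Checking the log's own numbers:

// slo-check.go: reproduce podStartSLOduration from the values logged above.
package main

import "fmt"

func main() {
	e2e := 13.884356498                // observedRunningTime - podCreationTimestamp, seconds
	pull := 14.896121213 - 6.821429614 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
	fmt.Printf("podStartSLOduration = %.9fs\n", e2e-pull) // 5.809664899s, exactly as logged
}

The earlier kube-proxy and control-plane entries show SLO and E2E durations equal because those images were never pulled (firstStartedPulling/lastFinishedPulling are the zero time).
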
Jan 16 09:06:20.010503 kubelet[2534]: I0116 09:06:20.010427 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4xz\" (UniqueName: \"kubernetes.io/projected/50c5be45-f4b0-4f89-8053-18dee5079d95-kube-api-access-jr4xz\") pod \"calico-typha-999c8dfd4-zcckp\" (UID: \"50c5be45-f4b0-4f89-8053-18dee5079d95\") " pod="calico-system/calico-typha-999c8dfd4-zcckp"
Jan 16 09:06:20.010503 kubelet[2534]: I0116 09:06:20.010502 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50c5be45-f4b0-4f89-8053-18dee5079d95-tigera-ca-bundle\") pod \"calico-typha-999c8dfd4-zcckp\" (UID: \"50c5be45-f4b0-4f89-8053-18dee5079d95\") " pod="calico-system/calico-typha-999c8dfd4-zcckp"
Jan 16 09:06:20.010503 kubelet[2534]: I0116 09:06:20.010531 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/50c5be45-f4b0-4f89-8053-18dee5079d95-typha-certs\") pod \"calico-typha-999c8dfd4-zcckp\" (UID: \"50c5be45-f4b0-4f89-8053-18dee5079d95\") " pod="calico-system/calico-typha-999c8dfd4-zcckp"
Jan 16 09:06:20.227409 kubelet[2534]: E0116 09:06:20.227074 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:20.229453 containerd[1464]: time="2025-01-16T09:06:20.229228684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-999c8dfd4-zcckp,Uid:50c5be45-f4b0-4f89-8053-18dee5079d95,Namespace:calico-system,Attempt:0,}"
Jan 16 09:06:20.238401 systemd[1]: Created slice kubepods-besteffort-pod25258c1a_8bd3_419f_8f37_a841c92c5201.slice - libcontainer container kubepods-besteffort-pod25258c1a_8bd3_419f_8f37_a841c92c5201.slice.
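
The UniqueName values in the reconciler entries above encode the volume plugin (kubernetes.io/projected, kubernetes.io/configmap, kubernetes.io/secret) plus the pod UID and volume name. On the pod-spec side, a fragment matching the calico-typha volumes might look like the following sketch; the names are taken from the log, but the construction is illustrative (the real spec is generated by the tigera operator, and the kube-api-access volume is injected automatically as a projected service-account token):

// typha-volumes.go: illustrative pod-spec volumes matching the log above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{Name: "typha-certs", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "typha-certs"}}},
		{Name: "tigera-ca-bundle", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"}}}},
	}
	for _, v := range vols {
		fmt.Println("volume:", v.Name)
	}
}
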
Jan 16 09:06:20.315099 kubelet[2534]: I0116 09:06:20.314545 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/25258c1a-8bd3-419f-8f37-a841c92c5201-cni-log-dir\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315099 kubelet[2534]: I0116 09:06:20.314633 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/25258c1a-8bd3-419f-8f37-a841c92c5201-node-certs\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315099 kubelet[2534]: I0116 09:06:20.314663 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/25258c1a-8bd3-419f-8f37-a841c92c5201-var-lib-calico\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315099 kubelet[2534]: I0116 09:06:20.314690 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/25258c1a-8bd3-419f-8f37-a841c92c5201-cni-bin-dir\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315099 kubelet[2534]: I0116 09:06:20.314716 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/25258c1a-8bd3-419f-8f37-a841c92c5201-flexvol-driver-host\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315557 kubelet[2534]: I0116 09:06:20.314762 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25258c1a-8bd3-419f-8f37-a841c92c5201-lib-modules\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315557 kubelet[2534]: I0116 09:06:20.314796 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25258c1a-8bd3-419f-8f37-a841c92c5201-tigera-ca-bundle\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315557 kubelet[2534]: I0116 09:06:20.314826 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/25258c1a-8bd3-419f-8f37-a841c92c5201-var-run-calico\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315557 kubelet[2534]: I0116 09:06:20.314861 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/25258c1a-8bd3-419f-8f37-a841c92c5201-cni-net-dir\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315557 kubelet[2534]: I0116 09:06:20.314893 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25258c1a-8bd3-419f-8f37-a841c92c5201-xtables-lock\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315829 kubelet[2534]: I0116 09:06:20.314927 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/25258c1a-8bd3-419f-8f37-a841c92c5201-policysync\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.315829 kubelet[2534]: I0116 09:06:20.314959 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9l2t\" (UniqueName: \"kubernetes.io/projected/25258c1a-8bd3-419f-8f37-a841c92c5201-kube-api-access-l9l2t\") pod \"calico-node-kx6c2\" (UID: \"25258c1a-8bd3-419f-8f37-a841c92c5201\") " pod="calico-system/calico-node-kx6c2"
Jan 16 09:06:20.389941 kubelet[2534]: E0116 09:06:20.389789 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54"
Jan 16 09:06:20.391741 containerd[1464]: time="2025-01-16T09:06:20.383850446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 09:06:20.391741 containerd[1464]: time="2025-01-16T09:06:20.386696336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 09:06:20.391741 containerd[1464]: time="2025-01-16T09:06:20.386848023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:20.394203 containerd[1464]: time="2025-01-16T09:06:20.393207258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:20.443241 kubelet[2534]: E0116 09:06:20.442955 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.443241 kubelet[2534]: W0116 09:06:20.443008 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.443241 kubelet[2534]: E0116 09:06:20.443068 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.445650 kubelet[2534]: E0116 09:06:20.445418 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.446094 kubelet[2534]: W0116 09:06:20.445729 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.446094 kubelet[2534]: E0116 09:06:20.445770 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.454710 kubelet[2534]: E0116 09:06:20.452328 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.454710 kubelet[2534]: W0116 09:06:20.452367 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.456869 kubelet[2534]: E0116 09:06:20.456496 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.457217 kubelet[2534]: W0116 09:06:20.456535 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.458612 kubelet[2534]: E0116 09:06:20.458279 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.470039 kubelet[2534]: E0116 09:06:20.460570 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.476722 kubelet[2534]: E0116 09:06:20.476546 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.479486 kubelet[2534]: W0116 09:06:20.476583 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.479486 kubelet[2534]: E0116 09:06:20.476964 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.479362 systemd[1]: Started cri-containerd-b33dfc7871b71615a03b4f39265374f1abc4639117a05619dfe3d91dd115200e.scope - libcontainer container b33dfc7871b71615a03b4f39265374f1abc4639117a05619dfe3d91dd115200e.
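
A note on the driver-call.go failures that begin above and continue below: the FlexVolume protocol has the kubelet exec a driver binary with a subcommand (here init) and parse its stdout as a JSON status document. The calico-node pod, which installs Calico's nodeagent~uds driver into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, has not started yet, so each probe execs a missing binary, gets empty output, and json.Unmarshal of "" fails with "unexpected end of JSON input". A sketch of that failure mode, with the struct fields following the FlexVolume convention rather than kubelet's actual types:

// flexvol-init.go: why an absent driver binary yields the log error above.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the JSON a FlexVolume driver must print for `init`,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	output := "" // what exec'ing the missing nodeagent~uds/uds binary produces
	var st driverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		fmt.Printf("Failed to unmarshal output for command: init, output: %q, error: %v\n",
			output, err) // error: unexpected end of JSON input
	}
}

The flood stops once calico-node runs its flexvol-driver init container and drops the binary into place; until then kubelet re-probes the plugin directory on every volume event, which is why the triplet repeats so densely below.
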
Jan 16 09:06:20.485109 kubelet[2534]: E0116 09:06:20.484601 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.485109 kubelet[2534]: W0116 09:06:20.484633 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.485109 kubelet[2534]: E0116 09:06:20.484667 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.487055 kubelet[2534]: E0116 09:06:20.486089 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.487055 kubelet[2534]: W0116 09:06:20.486119 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.487055 kubelet[2534]: E0116 09:06:20.486151 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.487615 kubelet[2534]: E0116 09:06:20.487450 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.487615 kubelet[2534]: W0116 09:06:20.487479 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.487615 kubelet[2534]: E0116 09:06:20.487507 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.490161 kubelet[2534]: E0116 09:06:20.489484 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.490161 kubelet[2534]: W0116 09:06:20.489521 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.490161 kubelet[2534]: E0116 09:06:20.489552 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.493810 kubelet[2534]: E0116 09:06:20.493620 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.493810 kubelet[2534]: W0116 09:06:20.493654 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.493810 kubelet[2534]: E0116 09:06:20.493693 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.496092 kubelet[2534]: E0116 09:06:20.495630 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.496092 kubelet[2534]: W0116 09:06:20.495663 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.496092 kubelet[2534]: E0116 09:06:20.495697 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.497591 kubelet[2534]: E0116 09:06:20.497341 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.499298 kubelet[2534]: W0116 09:06:20.499093 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.499298 kubelet[2534]: E0116 09:06:20.499175 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.499915 kubelet[2534]: E0116 09:06:20.499727 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.499915 kubelet[2534]: W0116 09:06:20.499754 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.499915 kubelet[2534]: E0116 09:06:20.499778 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.501529 kubelet[2534]: E0116 09:06:20.501149 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.501529 kubelet[2534]: W0116 09:06:20.501406 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.501529 kubelet[2534]: E0116 09:06:20.501442 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.503237 kubelet[2534]: E0116 09:06:20.502594 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.503237 kubelet[2534]: W0116 09:06:20.502623 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.503237 kubelet[2534]: E0116 09:06:20.502649 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.504052 kubelet[2534]: E0116 09:06:20.503517 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.504052 kubelet[2534]: W0116 09:06:20.503537 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.504052 kubelet[2534]: E0116 09:06:20.503564 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.505303 kubelet[2534]: E0116 09:06:20.504969 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.505303 kubelet[2534]: W0116 09:06:20.505282 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.505303 kubelet[2534]: E0116 09:06:20.505318 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.506609 kubelet[2534]: E0116 09:06:20.506485 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.506609 kubelet[2534]: W0116 09:06:20.506556 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.506881 kubelet[2534]: E0116 09:06:20.506718 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.507972 kubelet[2534]: E0116 09:06:20.507772 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.507972 kubelet[2534]: W0116 09:06:20.507797 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.507972 kubelet[2534]: E0116 09:06:20.507848 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.509406 kubelet[2534]: E0116 09:06:20.509012 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.509406 kubelet[2534]: W0116 09:06:20.509035 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.509406 kubelet[2534]: E0116 09:06:20.509059 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.510198 kubelet[2534]: E0116 09:06:20.509953 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.510198 kubelet[2534]: W0116 09:06:20.509971 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.510198 kubelet[2534]: E0116 09:06:20.510146 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.511480 kubelet[2534]: E0116 09:06:20.511325 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.511480 kubelet[2534]: W0116 09:06:20.511355 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.511480 kubelet[2534]: E0116 09:06:20.511378 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.519026 kubelet[2534]: E0116 09:06:20.518324 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.519026 kubelet[2534]: W0116 09:06:20.518617 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.519026 kubelet[2534]: E0116 09:06:20.518659 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.519026 kubelet[2534]: I0116 09:06:20.518703 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d531d993-f717-4dc6-b57d-367e3bb2fd54-varrun\") pod \"csi-node-driver-vhc8p\" (UID: \"d531d993-f717-4dc6-b57d-367e3bb2fd54\") " pod="calico-system/csi-node-driver-vhc8p"
Jan 16 09:06:20.521129 kubelet[2534]: E0116 09:06:20.520683 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.521129 kubelet[2534]: W0116 09:06:20.520716 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.521129 kubelet[2534]: E0116 09:06:20.520770 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.522751 kubelet[2534]: I0116 09:06:20.521793 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6v57\" (UniqueName: \"kubernetes.io/projected/d531d993-f717-4dc6-b57d-367e3bb2fd54-kube-api-access-z6v57\") pod \"csi-node-driver-vhc8p\" (UID: \"d531d993-f717-4dc6-b57d-367e3bb2fd54\") " pod="calico-system/csi-node-driver-vhc8p"
Jan 16 09:06:20.522751 kubelet[2534]: E0116 09:06:20.522480 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.522751 kubelet[2534]: W0116 09:06:20.522523 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.522751 kubelet[2534]: E0116 09:06:20.522557 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.532295 kubelet[2534]: E0116 09:06:20.523482 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.532295 kubelet[2534]: W0116 09:06:20.523507 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.532295 kubelet[2534]: E0116 09:06:20.523566 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.532295 kubelet[2534]: I0116 09:06:20.523604 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d531d993-f717-4dc6-b57d-367e3bb2fd54-registration-dir\") pod \"csi-node-driver-vhc8p\" (UID: \"d531d993-f717-4dc6-b57d-367e3bb2fd54\") " pod="calico-system/csi-node-driver-vhc8p"
Jan 16 09:06:20.532295 kubelet[2534]: E0116 09:06:20.525823 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.532295 kubelet[2534]: W0116 09:06:20.525847 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.532295 kubelet[2534]: E0116 09:06:20.525883 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.532295 kubelet[2534]: E0116 09:06:20.526354 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.532295 kubelet[2534]: W0116 09:06:20.526376 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.535619 kubelet[2534]: E0116 09:06:20.526399 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.535619 kubelet[2534]: E0116 09:06:20.527502 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.535619 kubelet[2534]: W0116 09:06:20.527524 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.535619 kubelet[2534]: E0116 09:06:20.527559 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.535619 kubelet[2534]: E0116 09:06:20.527961 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.535619 kubelet[2534]: W0116 09:06:20.528015 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.535619 kubelet[2534]: E0116 09:06:20.528034 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.535619 kubelet[2534]: E0116 09:06:20.528586 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.535619 kubelet[2534]: W0116 09:06:20.528603 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.535619 kubelet[2534]: E0116 09:06:20.528627 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.539711 kubelet[2534]: I0116 09:06:20.528664 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d531d993-f717-4dc6-b57d-367e3bb2fd54-socket-dir\") pod \"csi-node-driver-vhc8p\" (UID: \"d531d993-f717-4dc6-b57d-367e3bb2fd54\") " pod="calico-system/csi-node-driver-vhc8p"
Jan 16 09:06:20.539711 kubelet[2534]: E0116 09:06:20.529686 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.539711 kubelet[2534]: W0116 09:06:20.529706 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.539711 kubelet[2534]: E0116 09:06:20.529727 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.539711 kubelet[2534]: E0116 09:06:20.532169 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.539711 kubelet[2534]: W0116 09:06:20.532211 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.539711 kubelet[2534]: E0116 09:06:20.532258 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.539711 kubelet[2534]: E0116 09:06:20.532677 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.539711 kubelet[2534]: W0116 09:06:20.532702 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.540243 kubelet[2534]: E0116 09:06:20.532772 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.540243 kubelet[2534]: E0116 09:06:20.534432 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.540243 kubelet[2534]: W0116 09:06:20.534466 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.540243 kubelet[2534]: E0116 09:06:20.534510 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.540243 kubelet[2534]: I0116 09:06:20.534574 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d531d993-f717-4dc6-b57d-367e3bb2fd54-kubelet-dir\") pod \"csi-node-driver-vhc8p\" (UID: \"d531d993-f717-4dc6-b57d-367e3bb2fd54\") " pod="calico-system/csi-node-driver-vhc8p"
Jan 16 09:06:20.540243 kubelet[2534]: E0116 09:06:20.536838 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.540243 kubelet[2534]: W0116 09:06:20.536875 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.540243 kubelet[2534]: E0116 09:06:20.536908 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.543400 kubelet[2534]: E0116 09:06:20.543191 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.543400 kubelet[2534]: W0116 09:06:20.543233 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.543400 kubelet[2534]: E0116 09:06:20.543269 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.548990 kubelet[2534]: E0116 09:06:20.547292 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:20.549589 containerd[1464]: time="2025-01-16T09:06:20.548420686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kx6c2,Uid:25258c1a-8bd3-419f-8f37-a841c92c5201,Namespace:calico-system,Attempt:0,}"
Jan 16 09:06:20.641419 kubelet[2534]: E0116 09:06:20.640179 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.641419 kubelet[2534]: W0116 09:06:20.640220 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.641419 kubelet[2534]: E0116 09:06:20.640275 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.644904 kubelet[2534]: E0116 09:06:20.644842 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.644904 kubelet[2534]: W0116 09:06:20.644882 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.645384 kubelet[2534]: E0116 09:06:20.644927 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.646202 kubelet[2534]: E0116 09:06:20.645736 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.646202 kubelet[2534]: W0116 09:06:20.645783 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.646202 kubelet[2534]: E0116 09:06:20.645826 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.647565 kubelet[2534]: E0116 09:06:20.646959 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.647565 kubelet[2534]: W0116 09:06:20.646999 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.647565 kubelet[2534]: E0116 09:06:20.647026 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.655708 kubelet[2534]: E0116 09:06:20.654687 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.655708 kubelet[2534]: W0116 09:06:20.654728 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.655708 kubelet[2534]: E0116 09:06:20.654969 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.657898 containerd[1464]: time="2025-01-16T09:06:20.654398829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 09:06:20.657898 containerd[1464]: time="2025-01-16T09:06:20.655408992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 09:06:20.657898 containerd[1464]: time="2025-01-16T09:06:20.655439237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:20.657898 containerd[1464]: time="2025-01-16T09:06:20.655637634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:20.658407 kubelet[2534]: E0116 09:06:20.657695 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.658407 kubelet[2534]: W0116 09:06:20.657723 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.658407 kubelet[2534]: E0116 09:06:20.658069 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.661560 kubelet[2534]: E0116 09:06:20.659261 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.661560 kubelet[2534]: W0116 09:06:20.659286 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.661560 kubelet[2534]: E0116 09:06:20.659362 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.661560 kubelet[2534]: E0116 09:06:20.661237 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.661560 kubelet[2534]: W0116 09:06:20.661311 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.662859 kubelet[2534]: E0116 09:06:20.662255 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.668157 kubelet[2534]: E0116 09:06:20.663794 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.669531 kubelet[2534]: W0116 09:06:20.668864 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.669531 kubelet[2534]: E0116 09:06:20.669125 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.675448 kubelet[2534]: E0116 09:06:20.674658 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.675448 kubelet[2534]: W0116 09:06:20.674707 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.676759 kubelet[2534]: E0116 09:06:20.676075 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.677805 kubelet[2534]: E0116 09:06:20.677596 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.677805 kubelet[2534]: W0116 09:06:20.677645 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.678655 kubelet[2534]: E0116 09:06:20.677973 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 09:06:20.679472 kubelet[2534]: E0116 09:06:20.679103 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 09:06:20.679472 kubelet[2534]: W0116 09:06:20.679130 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 09:06:20.680137 kubelet[2534]: E0116 09:06:20.680092 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 16 09:06:20.681152 kubelet[2534]: E0116 09:06:20.680956 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.681152 kubelet[2534]: W0116 09:06:20.681096 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.681679 kubelet[2534]: E0116 09:06:20.681555 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.682936 kubelet[2534]: E0116 09:06:20.682654 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.682936 kubelet[2534]: W0116 09:06:20.682723 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.683842 kubelet[2534]: E0116 09:06:20.683688 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.690855 kubelet[2534]: E0116 09:06:20.690459 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.690855 kubelet[2534]: W0116 09:06:20.690505 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.691831 kubelet[2534]: E0116 09:06:20.691399 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.694199 kubelet[2534]: E0116 09:06:20.694071 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.694681 kubelet[2534]: W0116 09:06:20.694425 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.695420 kubelet[2534]: E0116 09:06:20.694953 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.697616 kubelet[2534]: E0116 09:06:20.697291 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.697616 kubelet[2534]: W0116 09:06:20.697322 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.698415 kubelet[2534]: E0116 09:06:20.698293 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:20.699972 kubelet[2534]: E0116 09:06:20.699785 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.699972 kubelet[2534]: W0116 09:06:20.699813 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.701012 kubelet[2534]: E0116 09:06:20.700912 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.703168 kubelet[2534]: E0116 09:06:20.703126 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.703168 kubelet[2534]: W0116 09:06:20.703161 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.703472 kubelet[2534]: E0116 09:06:20.703306 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.707059 kubelet[2534]: E0116 09:06:20.706822 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.707059 kubelet[2534]: W0116 09:06:20.706854 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.707059 kubelet[2534]: E0116 09:06:20.707001 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.711384 kubelet[2534]: E0116 09:06:20.709945 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.711384 kubelet[2534]: W0116 09:06:20.710020 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.711929 kubelet[2534]: E0116 09:06:20.711693 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.727047 kubelet[2534]: E0116 09:06:20.716458 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.727047 kubelet[2534]: W0116 09:06:20.716500 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.727911 kubelet[2534]: E0116 09:06:20.727519 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:20.732479 kubelet[2534]: E0116 09:06:20.731804 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.732479 kubelet[2534]: W0116 09:06:20.731844 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.740613 kubelet[2534]: E0116 09:06:20.733542 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.740613 kubelet[2534]: E0116 09:06:20.735565 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.740613 kubelet[2534]: W0116 09:06:20.735596 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.740613 kubelet[2534]: E0116 09:06:20.735635 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.740613 kubelet[2534]: E0116 09:06:20.738129 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.740613 kubelet[2534]: W0116 09:06:20.738159 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.740613 kubelet[2534]: E0116 09:06:20.738186 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 09:06:20.741821 systemd[1]: Started cri-containerd-231a7630d022fa9af7a65e881470eb44abc665387a25be1d63702e1779320a30.scope - libcontainer container 231a7630d022fa9af7a65e881470eb44abc665387a25be1d63702e1779320a30. Jan 16 09:06:20.786896 kubelet[2534]: E0116 09:06:20.773163 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 09:06:20.786896 kubelet[2534]: W0116 09:06:20.773197 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 09:06:20.786896 kubelet[2534]: E0116 09:06:20.773277 2534 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 09:06:20.856861 containerd[1464]: time="2025-01-16T09:06:20.856752716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kx6c2,Uid:25258c1a-8bd3-419f-8f37-a841c92c5201,Namespace:calico-system,Attempt:0,} returns sandbox id \"231a7630d022fa9af7a65e881470eb44abc665387a25be1d63702e1779320a30\"" Jan 16 09:06:20.863474 kubelet[2534]: E0116 09:06:20.862825 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:20.868361 containerd[1464]: time="2025-01-16T09:06:20.867527921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 16 09:06:20.945116 containerd[1464]: time="2025-01-16T09:06:20.944691029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-999c8dfd4-zcckp,Uid:50c5be45-f4b0-4f89-8053-18dee5079d95,Namespace:calico-system,Attempt:0,} returns sandbox id \"b33dfc7871b71615a03b4f39265374f1abc4639117a05619dfe3d91dd115200e\"" Jan 16 09:06:20.947430 kubelet[2534]: E0116 09:06:20.946464 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:22.024601 kubelet[2534]: E0116 09:06:22.023117 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54" Jan 16 09:06:22.955342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492609116.mount: Deactivated successfully. 
Jan 16 09:06:23.583757 containerd[1464]: time="2025-01-16T09:06:23.583682966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:23.593448 containerd[1464]: time="2025-01-16T09:06:23.593354517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 16 09:06:23.596054 containerd[1464]: time="2025-01-16T09:06:23.594765547Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:23.614481 containerd[1464]: time="2025-01-16T09:06:23.608082380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:23.614481 containerd[1464]: time="2025-01-16T09:06:23.611499675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.7427357s" Jan 16 09:06:23.614481 containerd[1464]: time="2025-01-16T09:06:23.611565593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 16 09:06:23.618372 containerd[1464]: time="2025-01-16T09:06:23.618290347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 16 09:06:23.622802 containerd[1464]: time="2025-01-16T09:06:23.622743873Z" level=info msg="CreateContainer within sandbox \"231a7630d022fa9af7a65e881470eb44abc665387a25be1d63702e1779320a30\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 09:06:23.696305 containerd[1464]: time="2025-01-16T09:06:23.696201303Z" level=info msg="CreateContainer within sandbox \"231a7630d022fa9af7a65e881470eb44abc665387a25be1d63702e1779320a30\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"afc6b02720552cd76af7251391a9f4b9b8a7d15990c3a267fd9bde3628bb090e\"" Jan 16 09:06:23.698044 containerd[1464]: time="2025-01-16T09:06:23.697553641Z" level=info msg="StartContainer for \"afc6b02720552cd76af7251391a9f4b9b8a7d15990c3a267fd9bde3628bb090e\"" Jan 16 09:06:23.825713 systemd[1]: Started cri-containerd-afc6b02720552cd76af7251391a9f4b9b8a7d15990c3a267fd9bde3628bb090e.scope - libcontainer container afc6b02720552cd76af7251391a9f4b9b8a7d15990c3a267fd9bde3628bb090e. Jan 16 09:06:23.965280 containerd[1464]: time="2025-01-16T09:06:23.962315217Z" level=info msg="StartContainer for \"afc6b02720552cd76af7251391a9f4b9b8a7d15990c3a267fd9bde3628bb090e\" returns successfully" Jan 16 09:06:24.008517 systemd[1]: cri-containerd-afc6b02720552cd76af7251391a9f4b9b8a7d15990c3a267fd9bde3628bb090e.scope: Deactivated successfully. 
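
Two details in the entries above are worth making explicit. First, the pull timing is self-consistent: the PullImage request for ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1 was logged at 09:06:20.867 and the Pulled entry at 09:06:23.611 reports "in 2.7427357s"; 23.611 - 20.867 = 2.744 s, agreeing with containerd's own figure to within about a millisecond of logging skew. Second, flexvol-driver is a run-to-completion container: StartContainer returns successfully at 09:06:23.96 and systemd deactivates its scope at 09:06:24.008, so the "shim disconnected" / "cleaning up dead shim" messages just below are the normal teardown of a container that exited, not a crash.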
Jan 16 09:06:24.022622 kubelet[2534]: E0116 09:06:24.022543 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54" Jan 16 09:06:24.082511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afc6b02720552cd76af7251391a9f4b9b8a7d15990c3a267fd9bde3628bb090e-rootfs.mount: Deactivated successfully. Jan 16 09:06:24.210084 containerd[1464]: time="2025-01-16T09:06:24.209987197Z" level=info msg="shim disconnected" id=afc6b02720552cd76af7251391a9f4b9b8a7d15990c3a267fd9bde3628bb090e namespace=k8s.io Jan 16 09:06:24.211345 containerd[1464]: time="2025-01-16T09:06:24.210370919Z" level=warning msg="cleaning up after shim disconnected" id=afc6b02720552cd76af7251391a9f4b9b8a7d15990c3a267fd9bde3628bb090e namespace=k8s.io Jan 16 09:06:24.211345 containerd[1464]: time="2025-01-16T09:06:24.210398673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:06:24.380663 kubelet[2534]: E0116 09:06:24.378024 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:26.038809 kubelet[2534]: E0116 09:06:26.038194 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54" Jan 16 09:06:28.023127 kubelet[2534]: E0116 09:06:28.023045 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54" Jan 16 09:06:28.437312 containerd[1464]: time="2025-01-16T09:06:28.437036714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:28.442476 containerd[1464]: time="2025-01-16T09:06:28.442381124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 16 09:06:28.451284 containerd[1464]: time="2025-01-16T09:06:28.450236035Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:28.462035 containerd[1464]: time="2025-01-16T09:06:28.461409411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:28.463030 containerd[1464]: time="2025-01-16T09:06:28.462910029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 4.843261588s" Jan 16 
09:06:28.463030 containerd[1464]: time="2025-01-16T09:06:28.463027062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 16 09:06:28.475017 containerd[1464]: time="2025-01-16T09:06:28.473140044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 16 09:06:28.546499 containerd[1464]: time="2025-01-16T09:06:28.546430941Z" level=info msg="CreateContainer within sandbox \"b33dfc7871b71615a03b4f39265374f1abc4639117a05619dfe3d91dd115200e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 16 09:06:28.647297 containerd[1464]: time="2025-01-16T09:06:28.647240090Z" level=info msg="CreateContainer within sandbox \"b33dfc7871b71615a03b4f39265374f1abc4639117a05619dfe3d91dd115200e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3c129dd8e31b0dd853ec4109fae3978f0432a93526bfcfaca3f20a83081df6b5\"" Jan 16 09:06:28.652276 containerd[1464]: time="2025-01-16T09:06:28.650746511Z" level=info msg="StartContainer for \"3c129dd8e31b0dd853ec4109fae3978f0432a93526bfcfaca3f20a83081df6b5\"" Jan 16 09:06:28.827544 systemd[1]: Started cri-containerd-3c129dd8e31b0dd853ec4109fae3978f0432a93526bfcfaca3f20a83081df6b5.scope - libcontainer container 3c129dd8e31b0dd853ec4109fae3978f0432a93526bfcfaca3f20a83081df6b5. Jan 16 09:06:29.184423 containerd[1464]: time="2025-01-16T09:06:29.183901318Z" level=info msg="StartContainer for \"3c129dd8e31b0dd853ec4109fae3978f0432a93526bfcfaca3f20a83081df6b5\" returns successfully" Jan 16 09:06:29.439166 kubelet[2534]: E0116 09:06:29.424632 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:29.597197 kubelet[2534]: I0116 09:06:29.596792 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-999c8dfd4-zcckp" podStartSLOduration=3.079985845 podStartE2EDuration="10.596749767s" podCreationTimestamp="2025-01-16 09:06:19 +0000 UTC" firstStartedPulling="2025-01-16 09:06:20.948086346 +0000 UTC m=+20.373751587" lastFinishedPulling="2025-01-16 09:06:28.464850254 +0000 UTC m=+27.890515509" observedRunningTime="2025-01-16 09:06:29.594962137 +0000 UTC m=+29.020627403" watchObservedRunningTime="2025-01-16 09:06:29.596749767 +0000 UTC m=+29.022415042" Jan 16 09:06:30.023842 kubelet[2534]: E0116 09:06:30.023758 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54" Jan 16 09:06:30.431855 kubelet[2534]: I0116 09:06:30.428893 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:06:30.431855 kubelet[2534]: E0116 09:06:30.429389 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:32.023550 kubelet[2534]: E0116 09:06:32.023420 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54" Jan 16 09:06:34.035270 kubelet[2534]: E0116 09:06:34.035190 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54" Jan 16 09:06:36.024053 kubelet[2534]: E0116 09:06:36.023416 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54" Jan 16 09:06:37.004613 containerd[1464]: time="2025-01-16T09:06:37.004313442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:37.007634 containerd[1464]: time="2025-01-16T09:06:37.007526276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 16 09:06:37.075014 containerd[1464]: time="2025-01-16T09:06:37.073524625Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:37.096492 containerd[1464]: time="2025-01-16T09:06:37.096419864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:06:37.098606 containerd[1464]: time="2025-01-16T09:06:37.098537847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 8.625321496s" Jan 16 09:06:37.098606 containerd[1464]: time="2025-01-16T09:06:37.098594112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 16 09:06:37.105416 containerd[1464]: time="2025-01-16T09:06:37.105254670Z" level=info msg="CreateContainer within sandbox \"231a7630d022fa9af7a65e881470eb44abc665387a25be1d63702e1779320a30\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 09:06:37.231417 containerd[1464]: time="2025-01-16T09:06:37.229751317Z" level=info msg="CreateContainer within sandbox \"231a7630d022fa9af7a65e881470eb44abc665387a25be1d63702e1779320a30\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6\"" Jan 16 09:06:37.232863 containerd[1464]: time="2025-01-16T09:06:37.232792967Z" level=info msg="StartContainer for \"0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6\"" Jan 16 09:06:37.511108 systemd[1]: run-containerd-runc-k8s.io-0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6-runc.LiP2Sv.mount: Deactivated successfully. 
Jan 16 09:06:37.528367 systemd[1]: Started cri-containerd-0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6.scope - libcontainer container 0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6. Jan 16 09:06:37.609498 containerd[1464]: time="2025-01-16T09:06:37.609251729Z" level=info msg="StartContainer for \"0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6\" returns successfully" Jan 16 09:06:38.023208 kubelet[2534]: E0116 09:06:38.022405 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54" Jan 16 09:06:38.560543 kubelet[2534]: E0116 09:06:38.560477 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:39.090810 systemd[1]: cri-containerd-0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6.scope: Deactivated successfully. Jan 16 09:06:39.156063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6-rootfs.mount: Deactivated successfully. Jan 16 09:06:39.162413 containerd[1464]: time="2025-01-16T09:06:39.162093268Z" level=info msg="shim disconnected" id=0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6 namespace=k8s.io Jan 16 09:06:39.162413 containerd[1464]: time="2025-01-16T09:06:39.162373344Z" level=warning msg="cleaning up after shim disconnected" id=0acd51fe3326527ec8f977df2e4822e5a9ab9d5795a35693f76da4650c4f4cc6 namespace=k8s.io Jan 16 09:06:39.162413 containerd[1464]: time="2025-01-16T09:06:39.162392280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:06:39.192551 containerd[1464]: time="2025-01-16T09:06:39.192194197Z" level=warning msg="cleanup warnings time=\"2025-01-16T09:06:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 16 09:06:39.210278 kubelet[2534]: I0116 09:06:39.209366 2534 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 16 09:06:39.287091 systemd[1]: Created slice kubepods-burstable-podb702ae32_666c_42f3_b39e_07dfe42f3e21.slice - libcontainer container kubepods-burstable-podb702ae32_666c_42f3_b39e_07dfe42f3e21.slice. Jan 16 09:06:39.312999 systemd[1]: Created slice kubepods-besteffort-podaa2600c0_23e1_499d_8648_c34d81d3d9fd.slice - libcontainer container kubepods-besteffort-podaa2600c0_23e1_499d_8648_c34d81d3d9fd.slice. Jan 16 09:06:39.340246 systemd[1]: Created slice kubepods-burstable-podbf0a906a_5aaa_4b41_ac45_1d14d68ce2ba.slice - libcontainer container kubepods-burstable-podbf0a906a_5aaa_4b41_ac45_1d14d68ce2ba.slice. 
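
The Created slice entries above follow the kubelet's systemd cgroup-driver naming scheme: kubepods-<qos>-pod<uid>.slice, with the pod's QoS class (burstable, besteffort) in the middle and the dashes of the pod UID rewritten to underscores, since "-" is systemd's separator for slice hierarchies. A hypothetical helper (sliceName is not kubelet code) that reproduces the names seen here:

```go
// sliceName sketches how the kubelet's systemd cgroup driver derives a pod
// slice name: QoS class prefix plus the pod UID with "-" -> "_".
package main

import (
	"fmt"
	"strings"
)

func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Reproduces the slice created in the log for coredns-6f6b679f8f-4wc79.
	fmt.Println(sliceName("burstable", "b702ae32-666c-42f3-b39e-07dfe42f3e21"))
	// -> kubepods-burstable-podb702ae32_666c_42f3_b39e_07dfe42f3e21.slice
}
```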
Jan 16 09:06:39.356488 kubelet[2534]: I0116 09:06:39.356304 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4gd\" (UniqueName: \"kubernetes.io/projected/0181cb9a-b55f-410c-8edb-885fdf552f70-kube-api-access-vj4gd\") pod \"calico-kube-controllers-6c7c5469d4-468p9\" (UID: \"0181cb9a-b55f-410c-8edb-885fdf552f70\") " pod="calico-system/calico-kube-controllers-6c7c5469d4-468p9" Jan 16 09:06:39.356488 kubelet[2534]: I0116 09:06:39.356367 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6x7j\" (UniqueName: \"kubernetes.io/projected/aa2600c0-23e1-499d-8648-c34d81d3d9fd-kube-api-access-s6x7j\") pod \"calico-apiserver-665b6f6bf5-2vcpf\" (UID: \"aa2600c0-23e1-499d-8648-c34d81d3d9fd\") " pod="calico-apiserver/calico-apiserver-665b6f6bf5-2vcpf" Jan 16 09:06:39.356488 kubelet[2534]: I0116 09:06:39.356398 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7aa01779-2af2-4a27-987a-9e1a693e6e72-calico-apiserver-certs\") pod \"calico-apiserver-665b6f6bf5-ch8jl\" (UID: \"7aa01779-2af2-4a27-987a-9e1a693e6e72\") " pod="calico-apiserver/calico-apiserver-665b6f6bf5-ch8jl" Jan 16 09:06:39.356488 kubelet[2534]: I0116 09:06:39.356423 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xnrs\" (UniqueName: \"kubernetes.io/projected/7aa01779-2af2-4a27-987a-9e1a693e6e72-kube-api-access-6xnrs\") pod \"calico-apiserver-665b6f6bf5-ch8jl\" (UID: \"7aa01779-2af2-4a27-987a-9e1a693e6e72\") " pod="calico-apiserver/calico-apiserver-665b6f6bf5-ch8jl" Jan 16 09:06:39.358994 kubelet[2534]: I0116 09:06:39.356455 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b702ae32-666c-42f3-b39e-07dfe42f3e21-config-volume\") pod \"coredns-6f6b679f8f-4wc79\" (UID: \"b702ae32-666c-42f3-b39e-07dfe42f3e21\") " pod="kube-system/coredns-6f6b679f8f-4wc79" Jan 16 09:06:39.358994 kubelet[2534]: I0116 09:06:39.357272 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0181cb9a-b55f-410c-8edb-885fdf552f70-tigera-ca-bundle\") pod \"calico-kube-controllers-6c7c5469d4-468p9\" (UID: \"0181cb9a-b55f-410c-8edb-885fdf552f70\") " pod="calico-system/calico-kube-controllers-6c7c5469d4-468p9" Jan 16 09:06:39.358994 kubelet[2534]: I0116 09:06:39.357300 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p62xj\" (UniqueName: \"kubernetes.io/projected/bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba-kube-api-access-p62xj\") pod \"coredns-6f6b679f8f-b9474\" (UID: \"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba\") " pod="kube-system/coredns-6f6b679f8f-b9474" Jan 16 09:06:39.358994 kubelet[2534]: I0116 09:06:39.357335 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa2600c0-23e1-499d-8648-c34d81d3d9fd-calico-apiserver-certs\") pod \"calico-apiserver-665b6f6bf5-2vcpf\" (UID: \"aa2600c0-23e1-499d-8648-c34d81d3d9fd\") " pod="calico-apiserver/calico-apiserver-665b6f6bf5-2vcpf" Jan 16 09:06:39.358994 kubelet[2534]: I0116 09:06:39.357360 2534 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtqqp\" (UniqueName: \"kubernetes.io/projected/b702ae32-666c-42f3-b39e-07dfe42f3e21-kube-api-access-qtqqp\") pod \"coredns-6f6b679f8f-4wc79\" (UID: \"b702ae32-666c-42f3-b39e-07dfe42f3e21\") " pod="kube-system/coredns-6f6b679f8f-4wc79" Jan 16 09:06:39.359705 kubelet[2534]: I0116 09:06:39.357383 2534 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba-config-volume\") pod \"coredns-6f6b679f8f-b9474\" (UID: \"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba\") " pod="kube-system/coredns-6f6b679f8f-b9474" Jan 16 09:06:39.367709 systemd[1]: Created slice kubepods-besteffort-pod0181cb9a_b55f_410c_8edb_885fdf552f70.slice - libcontainer container kubepods-besteffort-pod0181cb9a_b55f_410c_8edb_885fdf552f70.slice. Jan 16 09:06:39.378797 systemd[1]: Created slice kubepods-besteffort-pod7aa01779_2af2_4a27_987a_9e1a693e6e72.slice - libcontainer container kubepods-besteffort-pod7aa01779_2af2_4a27_987a_9e1a693e6e72.slice. Jan 16 09:06:39.575002 kubelet[2534]: E0116 09:06:39.574857 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:39.577744 containerd[1464]: time="2025-01-16T09:06:39.577276767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 16 09:06:39.602328 kubelet[2534]: E0116 09:06:39.601769 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:39.605208 containerd[1464]: time="2025-01-16T09:06:39.605144790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wc79,Uid:b702ae32-666c-42f3-b39e-07dfe42f3e21,Namespace:kube-system,Attempt:0,}" Jan 16 09:06:39.638912 containerd[1464]: time="2025-01-16T09:06:39.637345407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665b6f6bf5-2vcpf,Uid:aa2600c0-23e1-499d-8648-c34d81d3d9fd,Namespace:calico-apiserver,Attempt:0,}" Jan 16 09:06:39.658362 kubelet[2534]: E0116 09:06:39.657768 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:39.681458 containerd[1464]: time="2025-01-16T09:06:39.676370155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c5469d4-468p9,Uid:0181cb9a-b55f-410c-8edb-885fdf552f70,Namespace:calico-system,Attempt:0,}" Jan 16 09:06:39.691688 containerd[1464]: time="2025-01-16T09:06:39.687103721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b9474,Uid:bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba,Namespace:kube-system,Attempt:0,}" Jan 16 09:06:39.696019 containerd[1464]: time="2025-01-16T09:06:39.695873438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665b6f6bf5-ch8jl,Uid:7aa01779-2af2-4a27-987a-9e1a693e6e72,Namespace:calico-apiserver,Attempt:0,}" Jan 16 09:06:40.046254 systemd[1]: Created slice kubepods-besteffort-podd531d993_f717_4dc6_b57d_367e3bb2fd54.slice - libcontainer container kubepods-besteffort-podd531d993_f717_4dc6_b57d_367e3bb2fd54.slice. 
Jan 16 09:06:40.152622 containerd[1464]: time="2025-01-16T09:06:40.152018303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhc8p,Uid:d531d993-f717-4dc6-b57d-367e3bb2fd54,Namespace:calico-system,Attempt:0,}" Jan 16 09:06:40.589328 containerd[1464]: time="2025-01-16T09:06:40.589262875Z" level=error msg="Failed to destroy network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.596536 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7-shm.mount: Deactivated successfully. Jan 16 09:06:40.612384 containerd[1464]: time="2025-01-16T09:06:40.612316135Z" level=error msg="Failed to destroy network for sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.614173 containerd[1464]: time="2025-01-16T09:06:40.614091420Z" level=error msg="encountered an error cleaning up failed sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.616273 containerd[1464]: time="2025-01-16T09:06:40.616199220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665b6f6bf5-ch8jl,Uid:7aa01779-2af2-4a27-987a-9e1a693e6e72,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.617666 containerd[1464]: time="2025-01-16T09:06:40.617571523Z" level=error msg="encountered an error cleaning up failed sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.617822 containerd[1464]: time="2025-01-16T09:06:40.617689423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c5469d4-468p9,Uid:0181cb9a-b55f-410c-8edb-885fdf552f70,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.618961 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50-shm.mount: Deactivated successfully. 
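
From here to the end of the section, every sandbox failure carries the same root cause: the Calico CNI plugin checks /var/lib/calico/nodename, a file the calico/node container writes once it is running, and fails both the add and the delete paths while the file is absent. Since calico-node is itself still starting on this node (its install-cni step only just completed, and the node image pull begins at 09:06:39.577 above), the kubelet's sandbox attempts for the coredns pods, both calico-apiserver pods, the kube-controllers pod, and the CSI node driver all fail and go back into the retry queue. A sketch of the guard implied by the error text, with the path and wording taken from the log (the real plugin's code is not shown here, and checkNodename is a hypothetical helper):

```go
// nodename_guard.go: sketches the precondition the log's CNI errors describe.
package main

import (
	"fmt"
	"os"
)

// nodenameFile is the file the Calico CNI plugin stats, per the error text.
const nodenameFile = "/var/lib/calico/nodename"

func checkNodename() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Mirrors the guidance embedded in the logged error message.
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := checkNodename(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```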
Jan 16 09:06:40.630150 containerd[1464]: time="2025-01-16T09:06:40.629260360Z" level=error msg="Failed to destroy network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.631893 kubelet[2534]: E0116 09:06:40.630810 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.631893 kubelet[2534]: E0116 09:06:40.630927 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-665b6f6bf5-ch8jl" Jan 16 09:06:40.631893 kubelet[2534]: E0116 09:06:40.630964 2534 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-665b6f6bf5-ch8jl" Jan 16 09:06:40.635432 containerd[1464]: time="2025-01-16T09:06:40.630954371Z" level=error msg="encountered an error cleaning up failed sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.635432 containerd[1464]: time="2025-01-16T09:06:40.631096342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b9474,Uid:bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.635528 kubelet[2534]: E0116 09:06:40.631052 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-665b6f6bf5-ch8jl_calico-apiserver(7aa01779-2af2-4a27-987a-9e1a693e6e72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-665b6f6bf5-ch8jl_calico-apiserver(7aa01779-2af2-4a27-987a-9e1a693e6e72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-665b6f6bf5-ch8jl" podUID="7aa01779-2af2-4a27-987a-9e1a693e6e72" Jan 16 09:06:40.635528 kubelet[2534]: E0116 09:06:40.631557 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.635528 kubelet[2534]: E0116 09:06:40.631640 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b9474" Jan 16 09:06:40.635687 kubelet[2534]: E0116 09:06:40.631697 2534 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b9474" Jan 16 09:06:40.635687 kubelet[2534]: E0116 09:06:40.631786 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-b9474_kube-system(bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-b9474_kube-system(bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b9474" podUID="bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba" Jan 16 09:06:40.635687 kubelet[2534]: E0116 09:06:40.632127 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.635828 kubelet[2534]: E0116 09:06:40.632227 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c7c5469d4-468p9" Jan 16 09:06:40.635828 kubelet[2534]: E0116 09:06:40.632307 2534 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c7c5469d4-468p9" Jan 16 09:06:40.635828 kubelet[2534]: E0116 09:06:40.632402 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c7c5469d4-468p9_calico-system(0181cb9a-b55f-410c-8edb-885fdf552f70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c7c5469d4-468p9_calico-system(0181cb9a-b55f-410c-8edb-885fdf552f70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c7c5469d4-468p9" podUID="0181cb9a-b55f-410c-8edb-885fdf552f70" Jan 16 09:06:40.665730 containerd[1464]: time="2025-01-16T09:06:40.665652938Z" level=error msg="Failed to destroy network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.666296 containerd[1464]: time="2025-01-16T09:06:40.666234687Z" level=error msg="encountered an error cleaning up failed sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.666406 containerd[1464]: time="2025-01-16T09:06:40.666333379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wc79,Uid:b702ae32-666c-42f3-b39e-07dfe42f3e21,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.666730 kubelet[2534]: E0116 09:06:40.666675 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:06:40.666818 kubelet[2534]: E0116 09:06:40.666772 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wc79" Jan 16 09:06:40.666852 kubelet[2534]: E0116 09:06:40.666824 2534 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4wc79"
Jan 16 09:06:40.666988 kubelet[2534]: E0116 09:06:40.666904 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4wc79_kube-system(b702ae32-666c-42f3-b39e-07dfe42f3e21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4wc79_kube-system(b702ae32-666c-42f3-b39e-07dfe42f3e21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4wc79" podUID="b702ae32-666c-42f3-b39e-07dfe42f3e21"
Jan 16 09:06:40.672359 containerd[1464]: time="2025-01-16T09:06:40.672135938Z" level=error msg="Failed to destroy network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:40.673381 containerd[1464]: time="2025-01-16T09:06:40.673326275Z" level=error msg="Failed to destroy network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:40.674262 containerd[1464]: time="2025-01-16T09:06:40.673804805Z" level=error msg="encountered an error cleaning up failed sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:40.674485 containerd[1464]: time="2025-01-16T09:06:40.674434340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665b6f6bf5-2vcpf,Uid:aa2600c0-23e1-499d-8648-c34d81d3d9fd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:40.674709 containerd[1464]: time="2025-01-16T09:06:40.674570643Z" level=error msg="encountered an error cleaning up failed sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:40.675077 kubelet[2534]: E0116 09:06:40.674951 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:40.675371 kubelet[2534]: E0116 09:06:40.675117 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-665b6f6bf5-2vcpf"
Jan 16 09:06:40.675371 kubelet[2534]: E0116 09:06:40.675157 2534 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-665b6f6bf5-2vcpf"
Jan 16 09:06:40.678055 containerd[1464]: time="2025-01-16T09:06:40.674780898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhc8p,Uid:d531d993-f717-4dc6-b57d-367e3bb2fd54,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:40.678264 kubelet[2534]: E0116 09:06:40.676195 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-665b6f6bf5-2vcpf_calico-apiserver(aa2600c0-23e1-499d-8648-c34d81d3d9fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-665b6f6bf5-2vcpf_calico-apiserver(aa2600c0-23e1-499d-8648-c34d81d3d9fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-665b6f6bf5-2vcpf" podUID="aa2600c0-23e1-499d-8648-c34d81d3d9fd"
Jan 16 09:06:40.678708 kubelet[2534]: E0116 09:06:40.678650 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:40.678852 kubelet[2534]: E0116 09:06:40.678738 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vhc8p"
Jan 16 09:06:40.678852 kubelet[2534]: E0116 09:06:40.678775 2534 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vhc8p"
Jan 16 09:06:40.678995 kubelet[2534]: E0116 09:06:40.678908 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vhc8p_calico-system(d531d993-f717-4dc6-b57d-367e3bb2fd54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vhc8p_calico-system(d531d993-f717-4dc6-b57d-367e3bb2fd54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54"
Jan 16 09:06:41.154874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea-shm.mount: Deactivated successfully.
Jan 16 09:06:41.155607 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5-shm.mount: Deactivated successfully.
Jan 16 09:06:41.155743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147-shm.mount: Deactivated successfully.
Jan 16 09:06:41.155854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9-shm.mount: Deactivated successfully.
Jan 16 09:06:41.594494 kubelet[2534]: I0116 09:06:41.594452 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50"
Jan 16 09:06:41.622072 containerd[1464]: time="2025-01-16T09:06:41.621466830Z" level=info msg="StopPodSandbox for \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\""
Jan 16 09:06:41.624792 containerd[1464]: time="2025-01-16T09:06:41.624624940Z" level=info msg="Ensure that sandbox b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50 in task-service has been cleanup successfully"
Jan 16 09:06:41.640411 kubelet[2534]: I0116 09:06:41.639799 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:06:41.651104 containerd[1464]: time="2025-01-16T09:06:41.651019070Z" level=info msg="StopPodSandbox for \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\""
Jan 16 09:06:41.651597 containerd[1464]: time="2025-01-16T09:06:41.651528995Z" level=info msg="Ensure that sandbox 1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5 in task-service has been cleanup successfully"
Jan 16 09:06:41.653676 kubelet[2534]: I0116 09:06:41.653187 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea"
Jan 16 09:06:41.656504 containerd[1464]: time="2025-01-16T09:06:41.656451041Z" level=info msg="StopPodSandbox for \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\""
Jan 16 09:06:41.657492 containerd[1464]: time="2025-01-16T09:06:41.657257577Z" level=info msg="Ensure that sandbox f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea in task-service has been cleanup successfully"
Jan 16 09:06:41.683801 kubelet[2534]: I0116 09:06:41.682250 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7"
Jan 16 09:06:41.692272 containerd[1464]: time="2025-01-16T09:06:41.691096717Z" level=info msg="StopPodSandbox for \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\""
Jan 16 09:06:41.695531 kubelet[2534]: I0116 09:06:41.695488 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147"
Jan 16 09:06:41.707688 containerd[1464]: time="2025-01-16T09:06:41.707120435Z" level=info msg="StopPodSandbox for \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\""
Jan 16 09:06:41.708245 containerd[1464]: time="2025-01-16T09:06:41.708020026Z" level=info msg="Ensure that sandbox 042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7 in task-service has been cleanup successfully"
Jan 16 09:06:41.712381 containerd[1464]: time="2025-01-16T09:06:41.711693092Z" level=info msg="Ensure that sandbox 3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147 in task-service has been cleanup successfully"
Jan 16 09:06:41.731709 kubelet[2534]: I0116 09:06:41.731659 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9"
Jan 16 09:06:41.736968 containerd[1464]: time="2025-01-16T09:06:41.736909637Z" level=info msg="StopPodSandbox for \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\""
Jan 16 09:06:41.742011 containerd[1464]: time="2025-01-16T09:06:41.741461491Z" level=info msg="Ensure that sandbox 52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9 in task-service has been cleanup successfully"
Jan 16 09:06:41.897724 containerd[1464]: time="2025-01-16T09:06:41.897397960Z" level=error msg="StopPodSandbox for \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\" failed" error="failed to destroy network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:41.898484 kubelet[2534]: E0116 09:06:41.897748 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:06:41.898484 kubelet[2534]: E0116 09:06:41.897855 2534 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"}
Jan 16 09:06:41.898484 kubelet[2534]: E0116 09:06:41.898007 2534 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 09:06:41.898484 kubelet[2534]: E0116 09:06:41.898049 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b9474" podUID="bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba"
Jan 16 09:06:41.956653 containerd[1464]: time="2025-01-16T09:06:41.956333769Z" level=error msg="StopPodSandbox for \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\" failed" error="failed to destroy network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:41.958281 kubelet[2534]: E0116 09:06:41.956661 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea"
Jan 16 09:06:41.958281 kubelet[2534]: E0116 09:06:41.956722 2534 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea"}
Jan 16 09:06:41.958281 kubelet[2534]: E0116 09:06:41.956766 2534 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d531d993-f717-4dc6-b57d-367e3bb2fd54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 09:06:41.958281 kubelet[2534]: E0116 09:06:41.956799 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d531d993-f717-4dc6-b57d-367e3bb2fd54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vhc8p" podUID="d531d993-f717-4dc6-b57d-367e3bb2fd54"
Jan 16 09:06:41.958914 containerd[1464]: time="2025-01-16T09:06:41.958667989Z" level=error msg="StopPodSandbox for \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\" failed" error="failed to destroy network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:41.959636 kubelet[2534]: E0116 09:06:41.959508 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7"
Jan 16 09:06:41.959890 kubelet[2534]: E0116 09:06:41.959656 2534 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7"}
Jan 16 09:06:41.959890 kubelet[2534]: E0116 09:06:41.959704 2534 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7aa01779-2af2-4a27-987a-9e1a693e6e72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 09:06:41.959890 kubelet[2534]: E0116 09:06:41.959739 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7aa01779-2af2-4a27-987a-9e1a693e6e72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-665b6f6bf5-ch8jl" podUID="7aa01779-2af2-4a27-987a-9e1a693e6e72"
Jan 16 09:06:41.964317 containerd[1464]: time="2025-01-16T09:06:41.964167716Z" level=error msg="StopPodSandbox for \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\" failed" error="failed to destroy network for sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:41.964716 kubelet[2534]: E0116 09:06:41.964522 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50"
Jan 16 09:06:41.964815 kubelet[2534]: E0116 09:06:41.964745 2534 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50"}
Jan 16 09:06:41.966044 kubelet[2534]: E0116 09:06:41.964803 2534 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0181cb9a-b55f-410c-8edb-885fdf552f70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 09:06:41.966044 kubelet[2534]: E0116 09:06:41.965804 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0181cb9a-b55f-410c-8edb-885fdf552f70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c7c5469d4-468p9" podUID="0181cb9a-b55f-410c-8edb-885fdf552f70"
Jan 16 09:06:41.995775 containerd[1464]: time="2025-01-16T09:06:41.995489527Z" level=error msg="StopPodSandbox for \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\" failed" error="failed to destroy network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:41.996377 kubelet[2534]: E0116 09:06:41.995799 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147"
Jan 16 09:06:41.996377 kubelet[2534]: E0116 09:06:41.995878 2534 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147"}
Jan 16 09:06:41.996377 kubelet[2534]: E0116 09:06:41.995925 2534 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa2600c0-23e1-499d-8648-c34d81d3d9fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 09:06:41.996377 kubelet[2534]: E0116 09:06:41.995955 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa2600c0-23e1-499d-8648-c34d81d3d9fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-665b6f6bf5-2vcpf" podUID="aa2600c0-23e1-499d-8648-c34d81d3d9fd"
Jan 16 09:06:42.005672 containerd[1464]: time="2025-01-16T09:06:42.005607027Z" level=error msg="StopPodSandbox for \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\" failed" error="failed to destroy network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:42.008034 kubelet[2534]: E0116 09:06:42.006329 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9"
Jan 16 09:06:42.008034 kubelet[2534]: E0116 09:06:42.006403 2534 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9"}
Jan 16 09:06:42.008034 kubelet[2534]: E0116 09:06:42.006453 2534 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b702ae32-666c-42f3-b39e-07dfe42f3e21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 09:06:42.008034 kubelet[2534]: E0116 09:06:42.006481 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b702ae32-666c-42f3-b39e-07dfe42f3e21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4wc79" podUID="b702ae32-666c-42f3-b39e-07dfe42f3e21"
Jan 16 09:06:44.930014 kubelet[2534]: I0116 09:06:44.929941 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 16 09:06:44.932029 kubelet[2534]: E0116 09:06:44.931959 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:45.751772 kubelet[2534]: E0116 09:06:45.751608 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:51.505356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2965814516.mount: Deactivated successfully.
Jan 16 09:06:51.995514 containerd[1464]: time="2025-01-16T09:06:51.857772440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 16 09:06:52.058382 containerd[1464]: time="2025-01-16T09:06:52.058300230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:06:52.126189 containerd[1464]: time="2025-01-16T09:06:52.126036005Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:06:52.140912 containerd[1464]: time="2025-01-16T09:06:52.140815475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:06:52.151448 containerd[1464]: time="2025-01-16T09:06:52.151164610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 12.564477317s"
Jan 16 09:06:52.151448 containerd[1464]: time="2025-01-16T09:06:52.151262089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 16 09:06:52.357554 containerd[1464]: time="2025-01-16T09:06:52.357091115Z" level=info msg="CreateContainer within sandbox \"231a7630d022fa9af7a65e881470eb44abc665387a25be1d63702e1779320a30\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 16 09:06:52.854054 containerd[1464]: time="2025-01-16T09:06:52.853738986Z" level=info msg="CreateContainer within sandbox \"231a7630d022fa9af7a65e881470eb44abc665387a25be1d63702e1779320a30\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"edcf1cc48fe682ecd68f9c7afa3d37d32155b44cd67842a96ecd78c247bc8578\""
Jan 16 09:06:52.876060 containerd[1464]: time="2025-01-16T09:06:52.873729701Z" level=info msg="StartContainer for \"edcf1cc48fe682ecd68f9c7afa3d37d32155b44cd67842a96ecd78c247bc8578\""
Jan 16 09:06:53.028035 containerd[1464]: time="2025-01-16T09:06:53.027963297Z" level=info msg="StopPodSandbox for \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\""
Jan 16 09:06:53.147905 systemd[1]: run-containerd-runc-k8s.io-edcf1cc48fe682ecd68f9c7afa3d37d32155b44cd67842a96ecd78c247bc8578-runc.YCYQYI.mount: Deactivated successfully.
Jan 16 09:06:53.160403 systemd[1]: Started cri-containerd-edcf1cc48fe682ecd68f9c7afa3d37d32155b44cd67842a96ecd78c247bc8578.scope - libcontainer container edcf1cc48fe682ecd68f9c7afa3d37d32155b44cd67842a96ecd78c247bc8578.
Jan 16 09:06:53.333654 containerd[1464]: time="2025-01-16T09:06:53.333550744Z" level=error msg="StopPodSandbox for \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\" failed" error="failed to destroy network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 09:06:53.335537 kubelet[2534]: E0116 09:06:53.335234 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:06:53.337942 kubelet[2534]: E0116 09:06:53.337734 2534 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"}
Jan 16 09:06:53.337942 kubelet[2534]: E0116 09:06:53.337833 2534 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 09:06:53.337942 kubelet[2534]: E0116 09:06:53.337876 2534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b9474" podUID="bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba"
Jan 16 09:06:53.415060 containerd[1464]: time="2025-01-16T09:06:53.414558252Z" level=info msg="StartContainer for \"edcf1cc48fe682ecd68f9c7afa3d37d32155b44cd67842a96ecd78c247bc8578\" returns successfully"
Jan 16 09:06:53.757084 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 16 09:06:53.763270 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 16 09:06:53.869450 kubelet[2534]: E0116 09:06:53.868074 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:54.029819 kubelet[2534]: I0116 09:06:54.004406 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kx6c2" podStartSLOduration=2.675632796 podStartE2EDuration="33.988996877s" podCreationTimestamp="2025-01-16 09:06:20 +0000 UTC" firstStartedPulling="2025-01-16 09:06:20.866974104 +0000 UTC m=+20.292639343" lastFinishedPulling="2025-01-16 09:06:52.180338184 +0000 UTC m=+51.606003424" observedRunningTime="2025-01-16 09:06:53.977657537 +0000 UTC m=+53.403322800" watchObservedRunningTime="2025-01-16 09:06:53.988996877 +0000 UTC m=+53.414662143"
Jan 16 09:06:54.123630 systemd[1]: run-containerd-runc-k8s.io-edcf1cc48fe682ecd68f9c7afa3d37d32155b44cd67842a96ecd78c247bc8578-runc.0dyFZu.mount: Deactivated successfully.
Jan 16 09:06:54.889419 kubelet[2534]: E0116 09:06:54.889352 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:55.029803 containerd[1464]: time="2025-01-16T09:06:55.029723208Z" level=info msg="StopPodSandbox for \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\""
Jan 16 09:06:55.034765 containerd[1464]: time="2025-01-16T09:06:55.034698343Z" level=info msg="StopPodSandbox for \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\""
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.371 [INFO][3703] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7"
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.371 [INFO][3703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" iface="eth0" netns="/var/run/netns/cni-c400727c-1f54-b15d-fd52-7b7fde5f0c0d"
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.371 [INFO][3703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" iface="eth0" netns="/var/run/netns/cni-c400727c-1f54-b15d-fd52-7b7fde5f0c0d"
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.373 [INFO][3703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" iface="eth0" netns="/var/run/netns/cni-c400727c-1f54-b15d-fd52-7b7fde5f0c0d"
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.373 [INFO][3703] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7"
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.373 [INFO][3703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7"
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.743 [INFO][3721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" HandleID="k8s-pod-network.042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0"
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.747 [INFO][3721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.748 [INFO][3721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.791 [WARNING][3721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" HandleID="k8s-pod-network.042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0"
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.791 [INFO][3721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" HandleID="k8s-pod-network.042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0"
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.800 [INFO][3721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 16 09:06:55.811581 containerd[1464]: 2025-01-16 09:06:55.806 [INFO][3703] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7"
Jan 16 09:06:55.814416 containerd[1464]: time="2025-01-16T09:06:55.813542405Z" level=info msg="TearDown network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\" successfully"
Jan 16 09:06:55.814416 containerd[1464]: time="2025-01-16T09:06:55.814134317Z" level=info msg="StopPodSandbox for \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\" returns successfully"
Jan 16 09:06:55.822771 containerd[1464]: time="2025-01-16T09:06:55.821150857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665b6f6bf5-ch8jl,Uid:7aa01779-2af2-4a27-987a-9e1a693e6e72,Namespace:calico-apiserver,Attempt:1,}"
Jan 16 09:06:55.825254 systemd[1]: run-netns-cni\x2dc400727c\x2d1f54\x2db15d\x2dfd52\x2d7b7fde5f0c0d.mount: Deactivated successfully.
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.367 [INFO][3705] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea"
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.367 [INFO][3705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" iface="eth0" netns="/var/run/netns/cni-f3d3b8fd-6753-a323-f36b-24edcfb87f7c"
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.368 [INFO][3705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" iface="eth0" netns="/var/run/netns/cni-f3d3b8fd-6753-a323-f36b-24edcfb87f7c"
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.372 [INFO][3705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" iface="eth0" netns="/var/run/netns/cni-f3d3b8fd-6753-a323-f36b-24edcfb87f7c"
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.372 [INFO][3705] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea"
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.372 [INFO][3705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea"
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.750 [INFO][3720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" HandleID="k8s-pod-network.f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.752 [INFO][3720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.800 [INFO][3720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.838 [WARNING][3720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" HandleID="k8s-pod-network.f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.839 [INFO][3720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" HandleID="k8s-pod-network.f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.848 [INFO][3720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 16 09:06:55.860275 containerd[1464]: 2025-01-16 09:06:55.855 [INFO][3705] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea"
Jan 16 09:06:55.866266 containerd[1464]: time="2025-01-16T09:06:55.866114390Z" level=info msg="TearDown network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\" successfully"
Jan 16 09:06:55.866266 containerd[1464]: time="2025-01-16T09:06:55.866174569Z" level=info msg="StopPodSandbox for \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\" returns successfully"
Jan 16 09:06:55.890627 containerd[1464]: time="2025-01-16T09:06:55.877854989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhc8p,Uid:d531d993-f717-4dc6-b57d-367e3bb2fd54,Namespace:calico-system,Attempt:1,}"
Jan 16 09:06:55.890887 kubelet[2534]: E0116 09:06:55.886059 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:55.880818 systemd[1]: run-netns-cni\x2df3d3b8fd\x2d6753\x2da323\x2df36b\x2d24edcfb87f7c.mount: Deactivated successfully.
Jan 16 09:06:56.029394 containerd[1464]: time="2025-01-16T09:06:56.025578443Z" level=info msg="StopPodSandbox for \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\""
Jan 16 09:06:56.029394 containerd[1464]: time="2025-01-16T09:06:56.027398111Z" level=info msg="StopPodSandbox for \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\""
Jan 16 09:06:56.842196 systemd-networkd[1372]: calie8cf5a2f262: Link UP
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.544 [INFO][3816] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50"
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.544 [INFO][3816] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" iface="eth0" netns="/var/run/netns/cni-8bff0865-fbed-8b5e-dc55-d8656b815967"
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.548 [INFO][3816] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" iface="eth0" netns="/var/run/netns/cni-8bff0865-fbed-8b5e-dc55-d8656b815967"
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.557 [INFO][3816] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" iface="eth0" netns="/var/run/netns/cni-8bff0865-fbed-8b5e-dc55-d8656b815967"
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.557 [INFO][3816] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50"
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.557 [INFO][3816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50"
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.728 [INFO][3845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" HandleID="k8s-pod-network.b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0"
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.729 [INFO][3845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.784 [INFO][3845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.807 [WARNING][3845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" HandleID="k8s-pod-network.b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0"
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.807 [INFO][3845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" HandleID="k8s-pod-network.b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0"
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.826 [INFO][3845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 16 09:06:56.861173 containerd[1464]: 2025-01-16 09:06:56.851 [INFO][3816] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50"
Jan 16 09:06:56.874959 containerd[1464]: time="2025-01-16T09:06:56.862509262Z" level=info msg="TearDown network for sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\" successfully"
Jan 16 09:06:56.874959 containerd[1464]: time="2025-01-16T09:06:56.862585380Z" level=info msg="StopPodSandbox for \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\" returns successfully"
Jan 16 09:06:56.874959 containerd[1464]: time="2025-01-16T09:06:56.870062332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c5469d4-468p9,Uid:0181cb9a-b55f-410c-8edb-885fdf552f70,Namespace:calico-system,Attempt:1,}"
Jan 16 09:06:56.880120 systemd-networkd[1372]: calie8cf5a2f262: Gained carrier
Jan 16 09:06:56.891956 systemd[1]: run-netns-cni\x2d8bff0865\x2dfbed\x2d8b5e\x2ddc55\x2dd8656b815967.mount: Deactivated successfully.
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.167 [INFO][3777] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.238 [INFO][3777] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0 csi-node-driver- calico-system d531d993-f717-4dc6-b57d-367e3bb2fd54 796 0 2025-01-16 09:06:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-f-3b05cacdca csi-node-driver-vhc8p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie8cf5a2f262 [] []}} ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Namespace="calico-system" Pod="csi-node-driver-vhc8p" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.238 [INFO][3777] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Namespace="calico-system" Pod="csi-node-driver-vhc8p" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.560 [INFO][3826] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" HandleID="k8s-pod-network.2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.614 [INFO][3826] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" HandleID="k8s-pod-network.2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-f-3b05cacdca", "pod":"csi-node-driver-vhc8p", "timestamp":"2025-01-16 09:06:56.560543272 +0000 UTC"}, Hostname:"ci-4081.3.0-f-3b05cacdca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.615 [INFO][3826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.619 [INFO][3826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.665 [INFO][3826] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-3b05cacdca'
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.691 [INFO][3826] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.724 [INFO][3826] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.746 [INFO][3826] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.750 [INFO][3826] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.756 [INFO][3826] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.756 [INFO][3826] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.759 [INFO][3826] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.767 [INFO][3826] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.783 [INFO][3826] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.193/26] block=192.168.65.192/26 handle="k8s-pod-network.2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.783 [INFO][3826] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.193/26] handle="k8s-pod-network.2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.783 [INFO][3826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 16 09:06:56.920914 containerd[1464]: 2025-01-16 09:06:56.784 [INFO][3826] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.193/26] IPv6=[] ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" HandleID="k8s-pod-network.2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:56.923036 containerd[1464]: 2025-01-16 09:06:56.791 [INFO][3777] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Namespace="calico-system" Pod="csi-node-driver-vhc8p" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d531d993-f717-4dc6-b57d-367e3bb2fd54", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"", Pod:"csi-node-driver-vhc8p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8cf5a2f262", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 16 09:06:56.923036 containerd[1464]: 2025-01-16 09:06:56.791 [INFO][3777] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.193/32] ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Namespace="calico-system" Pod="csi-node-driver-vhc8p" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:56.923036 containerd[1464]: 2025-01-16 09:06:56.792 [INFO][3777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8cf5a2f262 ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Namespace="calico-system" Pod="csi-node-driver-vhc8p" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:56.923036 containerd[1464]: 2025-01-16 09:06:56.844 [INFO][3777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Namespace="calico-system" Pod="csi-node-driver-vhc8p" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:56.923036 containerd[1464]: 2025-01-16 09:06:56.848 [INFO][3777] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Namespace="calico-system" Pod="csi-node-driver-vhc8p" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d531d993-f717-4dc6-b57d-367e3bb2fd54", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd", Pod:"csi-node-driver-vhc8p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8cf5a2f262", MAC:"f6:83:cd:5e:54:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 16 09:06:56.923036 containerd[1464]: 2025-01-16 09:06:56.910 [INFO][3777] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd" Namespace="calico-system" Pod="csi-node-driver-vhc8p" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0"
Jan 16 09:06:57.032163 containerd[1464]: time="2025-01-16T09:06:57.031695859Z" level=info msg="StopPodSandbox for \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\""
Jan 16 09:06:57.136798 systemd-networkd[1372]: calif276a7516bc: Link UP
Jan 16 09:06:57.142397 systemd-networkd[1372]: calif276a7516bc: Gained carrier
Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.206 [INFO][3759] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.325 [INFO][3759] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0 calico-apiserver-665b6f6bf5- calico-apiserver 7aa01779-2af2-4a27-987a-9e1a693e6e72 795 0 2025-01-16 09:06:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:665b6f6bf5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-f-3b05cacdca calico-apiserver-665b6f6bf5-ch8jl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif276a7516bc [] []}} ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-ch8jl" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-"
Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.327 [INFO][3759] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-ch8jl" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0"
Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.667 [INFO][3832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" HandleID="k8s-pod-network.8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0"
Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.733 [INFO][3832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" HandleID="k8s-pod-network.8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e84a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-f-3b05cacdca", "pod":"calico-apiserver-665b6f6bf5-ch8jl", "timestamp":"2025-01-16 09:06:56.667872493 +0000 UTC"}, Hostname:"ci-4081.3.0-f-3b05cacdca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.733 [INFO][3832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.832 [INFO][3832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.835 [INFO][3832] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-3b05cacdca' Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.847 [INFO][3832] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.913 [INFO][3832] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.954 [INFO][3832] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:56.978 [INFO][3832] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:57.010 [INFO][3832] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:57.010 [INFO][3832] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:57.037 [INFO][3832] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:57.062 [INFO][3832] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:57.112 [INFO][3832] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.194/26] block=192.168.65.192/26 handle="k8s-pod-network.8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:57.113 [INFO][3832] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.194/26] handle="k8s-pod-network.8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:57.113 [INFO][3832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:06:57.218870 containerd[1464]: 2025-01-16 09:06:57.113 [INFO][3832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.194/26] IPv6=[] ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" HandleID="k8s-pod-network.8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:06:57.223953 containerd[1464]: 2025-01-16 09:06:57.118 [INFO][3759] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-ch8jl" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0", GenerateName:"calico-apiserver-665b6f6bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7aa01779-2af2-4a27-987a-9e1a693e6e72", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665b6f6bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"", Pod:"calico-apiserver-665b6f6bf5-ch8jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif276a7516bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:57.223953 containerd[1464]: 2025-01-16 09:06:57.131 [INFO][3759] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.194/32] ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-ch8jl" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:06:57.223953 containerd[1464]: 2025-01-16 09:06:57.132 [INFO][3759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif276a7516bc ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-ch8jl" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:06:57.223953 containerd[1464]: 2025-01-16 09:06:57.138 [INFO][3759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-ch8jl" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:06:57.223953 containerd[1464]: 2025-01-16 09:06:57.167 [INFO][3759] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-ch8jl" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0", GenerateName:"calico-apiserver-665b6f6bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7aa01779-2af2-4a27-987a-9e1a693e6e72", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665b6f6bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b", Pod:"calico-apiserver-665b6f6bf5-ch8jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif276a7516bc", MAC:"a2:3d:86:cb:93:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:57.223953 containerd[1464]: 2025-01-16 09:06:57.207 [INFO][3759] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-ch8jl" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:06:57.232226 containerd[1464]: time="2025-01-16T09:06:57.231714307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:57.232226 containerd[1464]: time="2025-01-16T09:06:57.231837114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:57.232226 containerd[1464]: time="2025-01-16T09:06:57.231856377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:57.233473 containerd[1464]: time="2025-01-16T09:06:57.232929963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:56.436 [INFO][3814] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:56.436 [INFO][3814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" iface="eth0" netns="/var/run/netns/cni-f51c2f3a-3dbe-c4fe-6df8-a989cb148d9e" Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:56.437 [INFO][3814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" iface="eth0" netns="/var/run/netns/cni-f51c2f3a-3dbe-c4fe-6df8-a989cb148d9e" Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:56.446 [INFO][3814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" iface="eth0" netns="/var/run/netns/cni-f51c2f3a-3dbe-c4fe-6df8-a989cb148d9e" Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:56.446 [INFO][3814] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:56.446 [INFO][3814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:56.742 [INFO][3840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" HandleID="k8s-pod-network.3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:56.743 [INFO][3840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:57.117 [INFO][3840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:57.212 [WARNING][3840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" HandleID="k8s-pod-network.3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:57.212 [INFO][3840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" HandleID="k8s-pod-network.3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:57.235 [INFO][3840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:57.290386 containerd[1464]: 2025-01-16 09:06:57.255 [INFO][3814] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:06:57.290386 containerd[1464]: time="2025-01-16T09:06:57.288691706Z" level=info msg="TearDown network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\" successfully" Jan 16 09:06:57.290386 containerd[1464]: time="2025-01-16T09:06:57.288733852Z" level=info msg="StopPodSandbox for \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\" returns successfully" Jan 16 09:06:57.294372 containerd[1464]: time="2025-01-16T09:06:57.293722821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665b6f6bf5-2vcpf,Uid:aa2600c0-23e1-499d-8648-c34d81d3d9fd,Namespace:calico-apiserver,Attempt:1,}" Jan 16 09:06:57.319384 systemd[1]: Started cri-containerd-2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd.scope - libcontainer container 2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd. Jan 16 09:06:57.476104 containerd[1464]: time="2025-01-16T09:06:57.475719766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:57.476104 containerd[1464]: time="2025-01-16T09:06:57.475845138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:57.476104 containerd[1464]: time="2025-01-16T09:06:57.475895833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:57.477487 containerd[1464]: time="2025-01-16T09:06:57.476101178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:57.569308 systemd[1]: Started cri-containerd-8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b.scope - libcontainer container 8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b. Jan 16 09:06:57.744253 containerd[1464]: time="2025-01-16T09:06:57.743403729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhc8p,Uid:d531d993-f717-4dc6-b57d-367e3bb2fd54,Namespace:calico-system,Attempt:1,} returns sandbox id \"2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd\"" Jan 16 09:06:57.827480 containerd[1464]: time="2025-01-16T09:06:57.825533776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 16 09:06:57.847384 systemd[1]: run-netns-cni\x2df51c2f3a\x2d3dbe\x2dc4fe\x2d6df8\x2da989cb148d9e.mount: Deactivated successfully. Jan 16 09:06:57.980365 systemd[1]: Started sshd@7-137.184.14.123:22-139.178.68.195:42502.service - OpenSSH per-connection server daemon (139.178.68.195:42502). 
Jan 16 09:06:58.185062 containerd[1464]: time="2025-01-16T09:06:58.184884636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665b6f6bf5-ch8jl,Uid:7aa01779-2af2-4a27-987a-9e1a693e6e72,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b\"" Jan 16 09:06:58.199383 systemd-networkd[1372]: calie8cf5a2f262: Gained IPv6LL Jan 16 09:06:58.210355 sshd[4089]: Accepted publickey for core from 139.178.68.195 port 42502 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:58.219962 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:58.254605 systemd-networkd[1372]: cali3667017b1a5: Link UP Jan 16 09:06:58.257085 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 09:06:58.259846 systemd-logind[1447]: New session 8 of user core. Jan 16 09:06:58.265451 systemd-networkd[1372]: cali3667017b1a5: Gained carrier Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:57.679 [INFO][3947] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:57.679 [INFO][3947] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" iface="eth0" netns="/var/run/netns/cni-87cd4b53-fe55-a31b-910c-afa3b0c735ae" Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:57.682 [INFO][3947] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" iface="eth0" netns="/var/run/netns/cni-87cd4b53-fe55-a31b-910c-afa3b0c735ae" Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:57.686 [INFO][3947] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" iface="eth0" netns="/var/run/netns/cni-87cd4b53-fe55-a31b-910c-afa3b0c735ae" Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:57.686 [INFO][3947] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:57.686 [INFO][3947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:57.884 [INFO][4053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" HandleID="k8s-pod-network.52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:57.887 [INFO][4053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:58.196 [INFO][4053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:58.336 [WARNING][4053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" HandleID="k8s-pod-network.52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:58.337 [INFO][4053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" HandleID="k8s-pod-network.52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:58.346 [INFO][4053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:58.359294 containerd[1464]: 2025-01-16 09:06:58.353 [INFO][3947] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:06:58.378157 containerd[1464]: time="2025-01-16T09:06:58.359533360Z" level=info msg="TearDown network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\" successfully" Jan 16 09:06:58.378157 containerd[1464]: time="2025-01-16T09:06:58.359799173Z" level=info msg="StopPodSandbox for \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\" returns successfully" Jan 16 09:06:58.378272 kubelet[2534]: E0116 09:06:58.362733 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:58.385394 systemd[1]: run-netns-cni\x2d87cd4b53\x2dfe55\x2da31b\x2d910c\x2dafa3b0c735ae.mount: Deactivated successfully. Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:57.122 [INFO][3906] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:57.232 [INFO][3906] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0 calico-kube-controllers-6c7c5469d4- calico-system 0181cb9a-b55f-410c-8edb-885fdf552f70 804 0 2025-01-16 09:06:20 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c7c5469d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-f-3b05cacdca calico-kube-controllers-6c7c5469d4-468p9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3667017b1a5 [] []}} ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Namespace="calico-system" Pod="calico-kube-controllers-6c7c5469d4-468p9" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:57.234 [INFO][3906] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Namespace="calico-system" Pod="calico-kube-controllers-6c7c5469d4-468p9" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:57.537 [INFO][3984] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" HandleID="k8s-pod-network.b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:57.768 [INFO][3984] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" HandleID="k8s-pod-network.b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000337900), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-f-3b05cacdca", "pod":"calico-kube-controllers-6c7c5469d4-468p9", "timestamp":"2025-01-16 09:06:57.537731249 +0000 UTC"}, Hostname:"ci-4081.3.0-f-3b05cacdca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:57.769 [INFO][3984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:57.769 [INFO][3984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:57.770 [INFO][3984] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-3b05cacdca' Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:57.904 [INFO][3984] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.012 [INFO][3984] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.067 [INFO][3984] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.094 [INFO][3984] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.109 [INFO][3984] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.109 [INFO][3984] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.114 [INFO][3984] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24 Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.142 [INFO][3984] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.193 [INFO][3984] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.195/26] block=192.168.65.192/26 
handle="k8s-pod-network.b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.193 [INFO][3984] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.195/26] handle="k8s-pod-network.b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.194 [INFO][3984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:58.412514 containerd[1464]: 2025-01-16 09:06:58.194 [INFO][3984] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.195/26] IPv6=[] ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" HandleID="k8s-pod-network.b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:06:58.420436 containerd[1464]: 2025-01-16 09:06:58.216 [INFO][3906] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Namespace="calico-system" Pod="calico-kube-controllers-6c7c5469d4-468p9" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0", GenerateName:"calico-kube-controllers-6c7c5469d4-", Namespace:"calico-system", SelfLink:"", UID:"0181cb9a-b55f-410c-8edb-885fdf552f70", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c7c5469d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"", Pod:"calico-kube-controllers-6c7c5469d4-468p9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3667017b1a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:58.420436 containerd[1464]: 2025-01-16 09:06:58.216 [INFO][3906] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.195/32] ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Namespace="calico-system" Pod="calico-kube-controllers-6c7c5469d4-468p9" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:06:58.420436 containerd[1464]: 2025-01-16 09:06:58.222 [INFO][3906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3667017b1a5 ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Namespace="calico-system" 
Pod="calico-kube-controllers-6c7c5469d4-468p9" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:06:58.420436 containerd[1464]: 2025-01-16 09:06:58.240 [INFO][3906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Namespace="calico-system" Pod="calico-kube-controllers-6c7c5469d4-468p9" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:06:58.420436 containerd[1464]: 2025-01-16 09:06:58.249 [INFO][3906] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Namespace="calico-system" Pod="calico-kube-controllers-6c7c5469d4-468p9" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0", GenerateName:"calico-kube-controllers-6c7c5469d4-", Namespace:"calico-system", SelfLink:"", UID:"0181cb9a-b55f-410c-8edb-885fdf552f70", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c7c5469d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24", Pod:"calico-kube-controllers-6c7c5469d4-468p9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3667017b1a5", MAC:"3a:06:51:eb:34:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:58.420436 containerd[1464]: 2025-01-16 09:06:58.367 [INFO][3906] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24" Namespace="calico-system" Pod="calico-kube-controllers-6c7c5469d4-468p9" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:06:58.502480 containerd[1464]: time="2025-01-16T09:06:58.500285975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wc79,Uid:b702ae32-666c-42f3-b39e-07dfe42f3e21,Namespace:kube-system,Attempt:1,}" Jan 16 09:06:58.577650 systemd-networkd[1372]: calif276a7516bc: Gained IPv6LL Jan 16 09:06:58.592452 containerd[1464]: time="2025-01-16T09:06:58.590133674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:58.592452 containerd[1464]: time="2025-01-16T09:06:58.590232654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:58.592452 containerd[1464]: time="2025-01-16T09:06:58.590257479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:58.603049 containerd[1464]: time="2025-01-16T09:06:58.601320673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:58.881348 systemd[1]: Started cri-containerd-b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24.scope - libcontainer container b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24. Jan 16 09:06:58.888741 systemd-networkd[1372]: cali82caee57d8d: Link UP Jan 16 09:06:58.913577 systemd-networkd[1372]: cali82caee57d8d: Gained carrier Jan 16 09:06:59.241235 containerd[1464]: time="2025-01-16T09:06:59.241057043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c5469d4-468p9,Uid:0181cb9a-b55f-410c-8edb-885fdf552f70,Namespace:calico-system,Attempt:1,} returns sandbox id \"b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24\"" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:57.606 [INFO][4011] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:57.868 [INFO][4011] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0 calico-apiserver-665b6f6bf5- calico-apiserver aa2600c0-23e1-499d-8648-c34d81d3d9fd 803 0 2025-01-16 09:06:19 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:665b6f6bf5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-f-3b05cacdca calico-apiserver-665b6f6bf5-2vcpf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82caee57d8d [] []}} ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-2vcpf" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:57.868 [INFO][4011] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-2vcpf" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.066 [INFO][4088] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" HandleID="k8s-pod-network.bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.123 [INFO][4088] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" HandleID="k8s-pod-network.bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ee5e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-f-3b05cacdca", "pod":"calico-apiserver-665b6f6bf5-2vcpf", "timestamp":"2025-01-16 09:06:58.066931918 +0000 UTC"}, Hostname:"ci-4081.3.0-f-3b05cacdca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.124 [INFO][4088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.346 [INFO][4088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.347 [INFO][4088] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-3b05cacdca' Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.398 [INFO][4088] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.447 [INFO][4088] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.494 [INFO][4088] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.530 [INFO][4088] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.549 [INFO][4088] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.550 [INFO][4088] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.584 [INFO][4088] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.703 [INFO][4088] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.768 [INFO][4088] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.196/26] block=192.168.65.192/26 handle="k8s-pod-network.bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.769 [INFO][4088] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.196/26] handle="k8s-pod-network.bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 
09:06:58.770 [INFO][4088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:06:59.321333 containerd[1464]: 2025-01-16 09:06:58.772 [INFO][4088] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.196/26] IPv6=[] ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" HandleID="k8s-pod-network.bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:59.326558 containerd[1464]: 2025-01-16 09:06:58.828 [INFO][4011] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-2vcpf" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0", GenerateName:"calico-apiserver-665b6f6bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa2600c0-23e1-499d-8648-c34d81d3d9fd", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665b6f6bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"", Pod:"calico-apiserver-665b6f6bf5-2vcpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82caee57d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:59.326558 containerd[1464]: 2025-01-16 09:06:58.829 [INFO][4011] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.196/32] ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-2vcpf" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:59.326558 containerd[1464]: 2025-01-16 09:06:58.829 [INFO][4011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82caee57d8d ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-2vcpf" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:59.326558 containerd[1464]: 2025-01-16 09:06:58.915 [INFO][4011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-2vcpf" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:59.326558 
containerd[1464]: 2025-01-16 09:06:58.941 [INFO][4011] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-2vcpf" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0", GenerateName:"calico-apiserver-665b6f6bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa2600c0-23e1-499d-8648-c34d81d3d9fd", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665b6f6bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd", Pod:"calico-apiserver-665b6f6bf5-2vcpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82caee57d8d", MAC:"62:4b:3d:dc:54:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:59.326558 containerd[1464]: 2025-01-16 09:06:59.314 [INFO][4011] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd" Namespace="calico-apiserver" Pod="calico-apiserver-665b6f6bf5-2vcpf" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:06:59.366278 sshd[4089]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:59.377928 systemd[1]: sshd@7-137.184.14.123:22-139.178.68.195:42502.service: Deactivated successfully. Jan 16 09:06:59.388809 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 09:06:59.397647 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Jan 16 09:06:59.401184 systemd-logind[1447]: Removed session 8. Jan 16 09:06:59.437900 containerd[1464]: time="2025-01-16T09:06:59.437258936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:59.437900 containerd[1464]: time="2025-01-16T09:06:59.437397130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:59.437900 containerd[1464]: time="2025-01-16T09:06:59.437420257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:59.440248 containerd[1464]: time="2025-01-16T09:06:59.439765207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:59.560564 systemd[1]: Started cri-containerd-bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd.scope - libcontainer container bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd. Jan 16 09:06:59.781384 systemd-networkd[1372]: calid6181397d72: Link UP Jan 16 09:06:59.783784 systemd-networkd[1372]: calid6181397d72: Gained carrier Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:58.961 [INFO][4145] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.380 [INFO][4145] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0 coredns-6f6b679f8f- kube-system b702ae32-666c-42f3-b39e-07dfe42f3e21 829 0 2025-01-16 09:06:04 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-f-3b05cacdca coredns-6f6b679f8f-4wc79 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid6181397d72 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wc79" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.382 [INFO][4145] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wc79" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.624 [INFO][4218] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" HandleID="k8s-pod-network.964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.643 [INFO][4218] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" HandleID="k8s-pod-network.964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec090), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-f-3b05cacdca", "pod":"coredns-6f6b679f8f-4wc79", "timestamp":"2025-01-16 09:06:59.624896032 +0000 UTC"}, Hostname:"ci-4081.3.0-f-3b05cacdca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.643 [INFO][4218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.643 [INFO][4218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.644 [INFO][4218] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-3b05cacdca' Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.650 [INFO][4218] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.662 [INFO][4218] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.682 [INFO][4218] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.689 [INFO][4218] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.702 [INFO][4218] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.703 [INFO][4218] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.708 [INFO][4218] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30 Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.720 [INFO][4218] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.736 [INFO][4218] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.197/26] block=192.168.65.192/26 handle="k8s-pod-network.964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.737 [INFO][4218] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.197/26] handle="k8s-pod-network.964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" host="ci-4081.3.0-f-3b05cacdca" Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.737 [INFO][4218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:06:59.852923 containerd[1464]: 2025-01-16 09:06:59.737 [INFO][4218] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.197/26] IPv6=[] ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" HandleID="k8s-pod-network.964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:59.855085 containerd[1464]: 2025-01-16 09:06:59.751 [INFO][4145] cni-plugin/k8s.go 386: Populated endpoint ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wc79" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b702ae32-666c-42f3-b39e-07dfe42f3e21", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"", Pod:"coredns-6f6b679f8f-4wc79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6181397d72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:59.855085 containerd[1464]: 2025-01-16 09:06:59.751 [INFO][4145] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.197/32] ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wc79" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:59.855085 containerd[1464]: 2025-01-16 09:06:59.752 [INFO][4145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6181397d72 ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wc79" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:59.855085 containerd[1464]: 2025-01-16 09:06:59.784 [INFO][4145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wc79" 
WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:59.855085 containerd[1464]: 2025-01-16 09:06:59.791 [INFO][4145] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wc79" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b702ae32-666c-42f3-b39e-07dfe42f3e21", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30", Pod:"coredns-6f6b679f8f-4wc79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6181397d72", MAC:"ea:02:1d:d6:35:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:06:59.857951 containerd[1464]: 2025-01-16 09:06:59.846 [INFO][4145] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30" Namespace="kube-system" Pod="coredns-6f6b679f8f-4wc79" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:06:59.940139 containerd[1464]: time="2025-01-16T09:06:59.940062779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-665b6f6bf5-2vcpf,Uid:aa2600c0-23e1-499d-8648-c34d81d3d9fd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd\"" Jan 16 09:06:59.960291 containerd[1464]: time="2025-01-16T09:06:59.959240107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:06:59.960291 containerd[1464]: time="2025-01-16T09:06:59.959328504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:06:59.960291 containerd[1464]: time="2025-01-16T09:06:59.959346323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:06:59.960291 containerd[1464]: time="2025-01-16T09:06:59.959480759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:07:00.023596 systemd[1]: Started cri-containerd-964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30.scope - libcontainer container 964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30. Jan 16 09:07:00.189195 containerd[1464]: time="2025-01-16T09:07:00.188714387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4wc79,Uid:b702ae32-666c-42f3-b39e-07dfe42f3e21,Namespace:kube-system,Attempt:1,} returns sandbox id \"964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30\"" Jan 16 09:07:00.193555 kubelet[2534]: E0116 09:07:00.191811 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:00.200966 containerd[1464]: time="2025-01-16T09:07:00.200555890Z" level=info msg="CreateContainer within sandbox \"964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 09:07:00.241317 systemd-networkd[1372]: cali3667017b1a5: Gained IPv6LL Jan 16 09:07:00.352189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634564249.mount: Deactivated successfully. Jan 16 09:07:00.375946 containerd[1464]: time="2025-01-16T09:07:00.375028004Z" level=info msg="CreateContainer within sandbox \"964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f6d19f2639168b65ae2af9a47c8ccdefeccbeac132aeca7d63bf3609fd661ca\"" Jan 16 09:07:00.378011 containerd[1464]: time="2025-01-16T09:07:00.377383036Z" level=info msg="StartContainer for \"0f6d19f2639168b65ae2af9a47c8ccdefeccbeac132aeca7d63bf3609fd661ca\"" Jan 16 09:07:00.496332 systemd[1]: Started cri-containerd-0f6d19f2639168b65ae2af9a47c8ccdefeccbeac132aeca7d63bf3609fd661ca.scope - libcontainer container 0f6d19f2639168b65ae2af9a47c8ccdefeccbeac132aeca7d63bf3609fd661ca. 
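Among the entries above, the plugin pins the host-side veth to the deterministic name calid6181397d72 before the sandbox starts. As far as I know, Calico's default scheme is the prefix cali plus the first 11 hex characters of a SHA-1 over the workload's identity, which keeps the name stable across CNI retries and within the kernel's 15-character interface-name limit; the identity string below is an assumption for illustration and will not reproduce the logged suffix:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName sketches the naming scheme: fixed prefix plus a short stable
// hash, so the result is deterministic per workload and fits IFNAMSIZ
// (15 visible characters).
func vethName(prefix, workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return prefix + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical identity string; Calico builds the real one from the
	// orchestrator, namespace and pod name, and details vary by version.
	fmt.Println(vethName("cali", "k8s/kube-system/coredns-6f6b679f8f-4wc79"))
}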
Jan 16 09:07:00.522080 kernel: bpftool[4361]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 16 09:07:00.612503 containerd[1464]: time="2025-01-16T09:07:00.612422681Z" level=info msg="StartContainer for \"0f6d19f2639168b65ae2af9a47c8ccdefeccbeac132aeca7d63bf3609fd661ca\" returns successfully" Jan 16 09:07:00.756059 systemd-networkd[1372]: cali82caee57d8d: Gained IPv6LL Jan 16 09:07:00.868747 containerd[1464]: time="2025-01-16T09:07:00.868606497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:00.871404 containerd[1464]: time="2025-01-16T09:07:00.871246925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 16 09:07:00.875584 containerd[1464]: time="2025-01-16T09:07:00.875359777Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:00.885154 containerd[1464]: time="2025-01-16T09:07:00.885083902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:00.888951 containerd[1464]: time="2025-01-16T09:07:00.886295094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 3.060689526s" Jan 16 09:07:00.888951 containerd[1464]: time="2025-01-16T09:07:00.887810074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 16 09:07:00.897435 containerd[1464]: time="2025-01-16T09:07:00.896279801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 16 09:07:00.901247 containerd[1464]: time="2025-01-16T09:07:00.900130659Z" level=info msg="CreateContainer within sandbox \"2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 16 09:07:00.947206 containerd[1464]: time="2025-01-16T09:07:00.947139277Z" level=info msg="CreateContainer within sandbox \"2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"65b8c0715c0f36c815adb404d9594efda4f9a34045162c03cccbf684a2779e7d\"" Jan 16 09:07:00.948519 containerd[1464]: time="2025-01-16T09:07:00.948474259Z" level=info msg="StartContainer for \"65b8c0715c0f36c815adb404d9594efda4f9a34045162c03cccbf684a2779e7d\"" Jan 16 09:07:01.353380 containerd[1464]: time="2025-01-16T09:07:01.353300657Z" level=info msg="StopPodSandbox for \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\"" Jan 16 09:07:01.389123 systemd[1]: Started cri-containerd-65b8c0715c0f36c815adb404d9594efda4f9a34045162c03cccbf684a2779e7d.scope - libcontainer container 65b8c0715c0f36c815adb404d9594efda4f9a34045162c03cccbf684a2779e7d. 
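The containerd entries here are logfmt (time=..., level=..., msg=..., with backslash-escaped quotes inside msg), interleaved with Calico's bracketed glog-style lines. A rough Go sketch for pulling fields out of the logfmt entries, e.g. to correlate pull durations like the 3.060689526s above; the regex handles the \" escapes seen in the log but is not a complete logfmt parser:

package main

import (
	"fmt"
	"regexp"
)

// One logfmt field: key=bare or key="quoted, possibly with \" escapes".
// A sketch only: no unescaping, no support for empty quoted values.
var field = regexp.MustCompile(`(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))`)

func parse(line string) map[string]string {
	out := map[string]string{}
	for _, m := range field.FindAllStringSubmatch(line, -1) {
		if m[2] != "" {
			out[m[1]] = m[2] // quoted value, escapes left intact
		} else {
			out[m[1]] = m[3] // bare value
		}
	}
	return out
}

func main() {
	line := `time="2025-01-16T09:07:00.612422681Z" level=info msg="StartContainer for \"0f6d19f2639168b65ae2af9a47c8ccdefeccbeac132aeca7d63bf3609fd661ca\" returns successfully"`
	f := parse(line)
	fmt.Println(f["level"], "|", f["msg"])
}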
Jan 16 09:07:01.393363 systemd-networkd[1372]: calid6181397d72: Gained IPv6LL Jan 16 09:07:01.490615 kubelet[2534]: E0116 09:07:01.489422 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:01.526990 kubelet[2534]: I0116 09:07:01.525860 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4wc79" podStartSLOduration=57.525803818 podStartE2EDuration="57.525803818s" podCreationTimestamp="2025-01-16 09:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:07:01.516687423 +0000 UTC m=+60.942352688" watchObservedRunningTime="2025-01-16 09:07:01.525803818 +0000 UTC m=+60.951469087" Jan 16 09:07:01.862906 containerd[1464]: time="2025-01-16T09:07:01.820433374Z" level=info msg="StartContainer for \"65b8c0715c0f36c815adb404d9594efda4f9a34045162c03cccbf684a2779e7d\" returns successfully" Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.670 [WARNING][4420] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b702ae32-666c-42f3-b39e-07dfe42f3e21", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30", Pod:"coredns-6f6b679f8f-4wc79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6181397d72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.671 [INFO][4420] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.671 [INFO][4420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" iface="eth0" netns="" Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.671 [INFO][4420] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.671 [INFO][4420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.923 [INFO][4429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" HandleID="k8s-pod-network.52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.923 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.924 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.957 [WARNING][4429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" HandleID="k8s-pod-network.52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.957 [INFO][4429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" HandleID="k8s-pod-network.52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.973 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:01.999641 containerd[1464]: 2025-01-16 09:07:01.984 [INFO][4420] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:07:01.999641 containerd[1464]: time="2025-01-16T09:07:01.999411364Z" level=info msg="TearDown network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\" successfully" Jan 16 09:07:01.999641 containerd[1464]: time="2025-01-16T09:07:01.999502480Z" level=info msg="StopPodSandbox for \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\" returns successfully" Jan 16 09:07:02.005411 containerd[1464]: time="2025-01-16T09:07:02.002770347Z" level=info msg="RemovePodSandbox for \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\"" Jan 16 09:07:02.005411 containerd[1464]: time="2025-01-16T09:07:02.002830898Z" level=info msg="Forcibly stopping sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\"" Jan 16 09:07:02.143012 systemd-networkd[1372]: vxlan.calico: Link UP Jan 16 09:07:02.143028 systemd-networkd[1372]: vxlan.calico: Gained carrier Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.228 [WARNING][4479] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b702ae32-666c-42f3-b39e-07dfe42f3e21", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30", Pod:"coredns-6f6b679f8f-4wc79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6181397d72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.229 [INFO][4479] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.229 [INFO][4479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" iface="eth0" netns="" Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.229 [INFO][4479] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.229 [INFO][4479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.450 [INFO][4492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" HandleID="k8s-pod-network.52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.451 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.451 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
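The teardown traces above (and the near-identical repeats that follow for the other sandboxes) show why StopPodSandbox is safe to replay: the WEP is deleted only when CNI_CONTAINERID matches the endpoint's recorded ContainerID (here the endpoint already belongs to the newer sandbox 964084f6..., so the stale DEL for 52f30433... logs "don't delete WEP"), and releasing an address whose handle is already gone is ignored. A compact Go sketch of those two guards, with stand-in types:

package main

import "fmt"

// Stand-in types: endpoint for the stored WorkloadEndpoint, ipam for
// the allocator's handle table.
type endpoint struct{ containerID string }
type ipam struct{ handles map[string]bool }

// release tolerates absent handles, mirroring "Asked to release address
// but it doesn't exist. Ignoring".
func (a *ipam) release(handleID string) {
	if !a.handles[handleID] {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
		return
	}
	delete(a.handles, handleID)
}

// teardown releases by the caller's handle, then deletes the endpoint
// only when the calling container still owns it.
func teardown(ep *endpoint, a *ipam, cniContainerID string) {
	a.release("k8s-pod-network." + cniContainerID)
	if ep.containerID != cniContainerID {
		fmt.Println("WARNING: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP")
		return
	}
	fmt.Println("deleted WEP owned by", cniContainerID)
}

func main() {
	live := "964084f631facbc741a2891a2d2e0e4ad83eb5b0996ec005e930e38458840d30"
	stale := "52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9"
	ep := &endpoint{containerID: live}
	a := &ipam{handles: map[string]bool{"k8s-pod-network." + live: true}}
	teardown(ep, a, stale) // replayed DEL for the old sandbox: both guards fire
}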
Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.465 [WARNING][4492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" HandleID="k8s-pod-network.52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.465 [INFO][4492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" HandleID="k8s-pod-network.52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--4wc79-eth0" Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.483 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:02.498455 containerd[1464]: 2025-01-16 09:07:02.492 [INFO][4479] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9" Jan 16 09:07:02.499513 containerd[1464]: time="2025-01-16T09:07:02.498509904Z" level=info msg="TearDown network for sandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\" successfully" Jan 16 09:07:02.516399 containerd[1464]: time="2025-01-16T09:07:02.515866774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:02.516399 containerd[1464]: time="2025-01-16T09:07:02.516029303Z" level=info msg="RemovePodSandbox \"52f3043327162ce4e4984c772a74a562c8c5440db1e3d4d963c4b5a295ad3db9\" returns successfully" Jan 16 09:07:02.516931 containerd[1464]: time="2025-01-16T09:07:02.516698085Z" level=info msg="StopPodSandbox for \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\"" Jan 16 09:07:02.532393 kubelet[2534]: E0116 09:07:02.530545 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.639 [WARNING][4528] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d531d993-f717-4dc6-b57d-367e3bb2fd54", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd", Pod:"csi-node-driver-vhc8p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8cf5a2f262", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.640 [INFO][4528] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.641 [INFO][4528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" iface="eth0" netns="" Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.641 [INFO][4528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.641 [INFO][4528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.726 [INFO][4534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" HandleID="k8s-pod-network.f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0" Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.727 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.727 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.767 [WARNING][4534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" HandleID="k8s-pod-network.f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0" Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.767 [INFO][4534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" HandleID="k8s-pod-network.f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0" Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.786 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:02.800941 containerd[1464]: 2025-01-16 09:07:02.792 [INFO][4528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Jan 16 09:07:02.800941 containerd[1464]: time="2025-01-16T09:07:02.799429143Z" level=info msg="TearDown network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\" successfully" Jan 16 09:07:02.800941 containerd[1464]: time="2025-01-16T09:07:02.799469331Z" level=info msg="StopPodSandbox for \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\" returns successfully" Jan 16 09:07:02.800941 containerd[1464]: time="2025-01-16T09:07:02.800319117Z" level=info msg="RemovePodSandbox for \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\"" Jan 16 09:07:02.800941 containerd[1464]: time="2025-01-16T09:07:02.800367049Z" level=info msg="Forcibly stopping sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\"" Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.195 [WARNING][4552] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d531d993-f717-4dc6-b57d-367e3bb2fd54", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd", Pod:"csi-node-driver-vhc8p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8cf5a2f262", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.195 [INFO][4552] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.195 [INFO][4552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" iface="eth0" netns="" Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.195 [INFO][4552] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.196 [INFO][4552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.277 [INFO][4559] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" HandleID="k8s-pod-network.f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0" Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.277 [INFO][4559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.277 [INFO][4559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.304 [WARNING][4559] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" HandleID="k8s-pod-network.f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0" Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.304 [INFO][4559] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" HandleID="k8s-pod-network.f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Workload="ci--4081.3.0--f--3b05cacdca-k8s-csi--node--driver--vhc8p-eth0" Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.323 [INFO][4559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:03.344945 containerd[1464]: 2025-01-16 09:07:03.326 [INFO][4552] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea" Jan 16 09:07:03.346690 containerd[1464]: time="2025-01-16T09:07:03.346439970Z" level=info msg="TearDown network for sandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\" successfully" Jan 16 09:07:03.383579 containerd[1464]: time="2025-01-16T09:07:03.383510947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:03.384913 containerd[1464]: time="2025-01-16T09:07:03.384045096Z" level=info msg="RemovePodSandbox \"f526c61c8a81f0728d5aa3f67825551aa4b5a90229184d4d6c30b4a88a957fea\" returns successfully" Jan 16 09:07:03.386802 containerd[1464]: time="2025-01-16T09:07:03.386747891Z" level=info msg="StopPodSandbox for \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\"" Jan 16 09:07:03.441871 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Jan 16 09:07:03.546499 kubelet[2534]: E0116 09:07:03.545240 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.595 [WARNING][4585] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0", GenerateName:"calico-apiserver-665b6f6bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7aa01779-2af2-4a27-987a-9e1a693e6e72", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665b6f6bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b", Pod:"calico-apiserver-665b6f6bf5-ch8jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif276a7516bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.596 [INFO][4585] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.596 [INFO][4585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" iface="eth0" netns="" Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.596 [INFO][4585] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.596 [INFO][4585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.799 [INFO][4598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" HandleID="k8s-pod-network.042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.802 [INFO][4598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.802 [INFO][4598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.830 [WARNING][4598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" HandleID="k8s-pod-network.042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.830 [INFO][4598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" HandleID="k8s-pod-network.042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.848 [INFO][4598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:03.869877 containerd[1464]: 2025-01-16 09:07:03.861 [INFO][4585] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Jan 16 09:07:03.871423 containerd[1464]: time="2025-01-16T09:07:03.870344645Z" level=info msg="TearDown network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\" successfully" Jan 16 09:07:03.871423 containerd[1464]: time="2025-01-16T09:07:03.870386841Z" level=info msg="StopPodSandbox for \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\" returns successfully" Jan 16 09:07:03.871423 containerd[1464]: time="2025-01-16T09:07:03.871303865Z" level=info msg="RemovePodSandbox for \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\"" Jan 16 09:07:03.871423 containerd[1464]: time="2025-01-16T09:07:03.871349062Z" level=info msg="Forcibly stopping sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\"" Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.056 [WARNING][4645] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0", GenerateName:"calico-apiserver-665b6f6bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7aa01779-2af2-4a27-987a-9e1a693e6e72", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665b6f6bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b", Pod:"calico-apiserver-665b6f6bf5-ch8jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif276a7516bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.056 [INFO][4645] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.056 [INFO][4645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" iface="eth0" netns="" Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.056 [INFO][4645] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.058 [INFO][4645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.132 [INFO][4652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" HandleID="k8s-pod-network.042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.132 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.134 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.149 [WARNING][4652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" HandleID="k8s-pod-network.042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.149 [INFO][4652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" HandleID="k8s-pod-network.042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--ch8jl-eth0" Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.156 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:04.172278 containerd[1464]: 2025-01-16 09:07:04.161 [INFO][4645] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7" Jan 16 09:07:04.175182 containerd[1464]: time="2025-01-16T09:07:04.172540849Z" level=info msg="TearDown network for sandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\" successfully" Jan 16 09:07:04.279021 containerd[1464]: time="2025-01-16T09:07:04.278593749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:04.279021 containerd[1464]: time="2025-01-16T09:07:04.278701325Z" level=info msg="RemovePodSandbox \"042012fce860216abf89c314a52085527a3f63b18a150020261fdb272b0477e7\" returns successfully" Jan 16 09:07:04.280697 containerd[1464]: time="2025-01-16T09:07:04.280501597Z" level=info msg="StopPodSandbox for \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\"" Jan 16 09:07:04.391499 systemd[1]: Started sshd@8-137.184.14.123:22-139.178.68.195:42508.service - OpenSSH per-connection server daemon (139.178.68.195:42508). Jan 16 09:07:04.606802 sshd[4675]: Accepted publickey for core from 139.178.68.195 port 42508 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:07:04.612411 sshd[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:07:04.630104 systemd-logind[1447]: New session 9 of user core. Jan 16 09:07:04.637373 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.624 [WARNING][4670] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0", GenerateName:"calico-kube-controllers-6c7c5469d4-", Namespace:"calico-system", SelfLink:"", UID:"0181cb9a-b55f-410c-8edb-885fdf552f70", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c7c5469d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24", Pod:"calico-kube-controllers-6c7c5469d4-468p9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3667017b1a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.625 [INFO][4670] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.625 [INFO][4670] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" iface="eth0" netns="" Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.625 [INFO][4670] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.625 [INFO][4670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.713 [INFO][4679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" HandleID="k8s-pod-network.b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.714 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.714 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.745 [WARNING][4679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" HandleID="k8s-pod-network.b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.745 [INFO][4679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" HandleID="k8s-pod-network.b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.750 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:04.773337 containerd[1464]: 2025-01-16 09:07:04.753 [INFO][4670] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Jan 16 09:07:04.775668 containerd[1464]: time="2025-01-16T09:07:04.773406200Z" level=info msg="TearDown network for sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\" successfully" Jan 16 09:07:04.775668 containerd[1464]: time="2025-01-16T09:07:04.773442976Z" level=info msg="StopPodSandbox for \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\" returns successfully" Jan 16 09:07:04.778204 containerd[1464]: time="2025-01-16T09:07:04.778078397Z" level=info msg="RemovePodSandbox for \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\"" Jan 16 09:07:04.778204 containerd[1464]: time="2025-01-16T09:07:04.778202186Z" level=info msg="Forcibly stopping sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\"" Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.019 [WARNING][4703] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0", GenerateName:"calico-kube-controllers-6c7c5469d4-", Namespace:"calico-system", SelfLink:"", UID:"0181cb9a-b55f-410c-8edb-885fdf552f70", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c7c5469d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24", Pod:"calico-kube-controllers-6c7c5469d4-468p9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3667017b1a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.020 [INFO][4703] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.020 [INFO][4703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" iface="eth0" netns="" Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.020 [INFO][4703] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.020 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.121 [INFO][4712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" HandleID="k8s-pod-network.b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.121 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.121 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.139 [WARNING][4712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" HandleID="k8s-pod-network.b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.139 [INFO][4712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" HandleID="k8s-pod-network.b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--kube--controllers--6c7c5469d4--468p9-eth0" Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.145 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:05.156694 containerd[1464]: 2025-01-16 09:07:05.150 [INFO][4703] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50" Jan 16 09:07:05.156694 containerd[1464]: time="2025-01-16T09:07:05.156532939Z" level=info msg="TearDown network for sandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\" successfully" Jan 16 09:07:05.167380 containerd[1464]: time="2025-01-16T09:07:05.166465769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:05.167380 containerd[1464]: time="2025-01-16T09:07:05.166546735Z" level=info msg="RemovePodSandbox \"b26f7aa84b22b39b398480b1cd5bd16154475c2cf034ba2c398eaf85c040bf50\" returns successfully" Jan 16 09:07:05.170197 containerd[1464]: time="2025-01-16T09:07:05.169695857Z" level=info msg="StopPodSandbox for \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\"" Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.354 [WARNING][4730] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0", GenerateName:"calico-apiserver-665b6f6bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa2600c0-23e1-499d-8648-c34d81d3d9fd", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665b6f6bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd", Pod:"calico-apiserver-665b6f6bf5-2vcpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82caee57d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.354 [INFO][4730] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.354 [INFO][4730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" iface="eth0" netns="" Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.354 [INFO][4730] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.354 [INFO][4730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.464 [INFO][4737] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" HandleID="k8s-pod-network.3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.470 [INFO][4737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.470 [INFO][4737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.507 [WARNING][4737] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" HandleID="k8s-pod-network.3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.508 [INFO][4737] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" HandleID="k8s-pod-network.3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.516 [INFO][4737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:05.536191 containerd[1464]: 2025-01-16 09:07:05.530 [INFO][4730] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:07:05.538825 containerd[1464]: time="2025-01-16T09:07:05.537207348Z" level=info msg="TearDown network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\" successfully" Jan 16 09:07:05.538825 containerd[1464]: time="2025-01-16T09:07:05.537257330Z" level=info msg="StopPodSandbox for \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\" returns successfully" Jan 16 09:07:05.539163 containerd[1464]: time="2025-01-16T09:07:05.539079246Z" level=info msg="RemovePodSandbox for \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\"" Jan 16 09:07:05.539163 containerd[1464]: time="2025-01-16T09:07:05.539143036Z" level=info msg="Forcibly stopping sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\"" Jan 16 09:07:05.569192 sshd[4675]: pam_unix(sshd:session): session closed for user core Jan 16 09:07:05.578509 systemd[1]: sshd@8-137.184.14.123:22-139.178.68.195:42508.service: Deactivated successfully. Jan 16 09:07:05.587323 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 09:07:05.593038 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 16 09:07:05.595736 systemd-logind[1447]: Removed session 9. Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.716 [WARNING][4757] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0", GenerateName:"calico-apiserver-665b6f6bf5-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa2600c0-23e1-499d-8648-c34d81d3d9fd", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"665b6f6bf5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd", Pod:"calico-apiserver-665b6f6bf5-2vcpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82caee57d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.716 [INFO][4757] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.716 [INFO][4757] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" iface="eth0" netns="" Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.716 [INFO][4757] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.716 [INFO][4757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.797 [INFO][4765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" HandleID="k8s-pod-network.3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.797 [INFO][4765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.798 [INFO][4765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.818 [WARNING][4765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" HandleID="k8s-pod-network.3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.818 [INFO][4765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" HandleID="k8s-pod-network.3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Workload="ci--4081.3.0--f--3b05cacdca-k8s-calico--apiserver--665b6f6bf5--2vcpf-eth0" Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.822 [INFO][4765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:05.845180 containerd[1464]: 2025-01-16 09:07:05.842 [INFO][4757] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147" Jan 16 09:07:05.845180 containerd[1464]: time="2025-01-16T09:07:05.845135230Z" level=info msg="TearDown network for sandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\" successfully" Jan 16 09:07:05.880695 containerd[1464]: time="2025-01-16T09:07:05.879331386Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:07:05.882643 containerd[1464]: time="2025-01-16T09:07:05.882404494Z" level=info msg="RemovePodSandbox \"3cab1d0a1a8642c25d9d8d366f18e71a259253526a6cb0e4b970c5c17a172147\" returns successfully" Jan 16 09:07:06.256125 containerd[1464]: time="2025-01-16T09:07:06.255856203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:06.258163 containerd[1464]: time="2025-01-16T09:07:06.257532935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 16 09:07:06.260692 containerd[1464]: time="2025-01-16T09:07:06.260066088Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:06.266057 containerd[1464]: time="2025-01-16T09:07:06.265131688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:07:06.267514 containerd[1464]: time="2025-01-16T09:07:06.266832427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.370404321s" Jan 16 09:07:06.267514 containerd[1464]: time="2025-01-16T09:07:06.266904768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 16 09:07:06.271188 containerd[1464]: time="2025-01-16T09:07:06.270247568Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 16 09:07:06.275966 containerd[1464]: time="2025-01-16T09:07:06.275409332Z" level=info msg="CreateContainer within sandbox \"8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 16 09:07:06.312659 containerd[1464]: time="2025-01-16T09:07:06.312519424Z" level=info msg="CreateContainer within sandbox \"8bb7602f83b10afe41e22fb53e61eb10bfa1f73a7c76718a8f7fb7c45659934b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7f4c0af26b5601266d68a4333022e58f8832c75894a95c5f75c7652f782e7662\"" Jan 16 09:07:06.316885 containerd[1464]: time="2025-01-16T09:07:06.316725632Z" level=info msg="StartContainer for \"7f4c0af26b5601266d68a4333022e58f8832c75894a95c5f75c7652f782e7662\"" Jan 16 09:07:06.423394 systemd[1]: Started cri-containerd-7f4c0af26b5601266d68a4333022e58f8832c75894a95c5f75c7652f782e7662.scope - libcontainer container 7f4c0af26b5601266d68a4333022e58f8832c75894a95c5f75c7652f782e7662. Jan 16 09:07:06.538733 containerd[1464]: time="2025-01-16T09:07:06.537744941Z" level=info msg="StartContainer for \"7f4c0af26b5601266d68a4333022e58f8832c75894a95c5f75c7652f782e7662\" returns successfully" Jan 16 09:07:07.616243 kubelet[2534]: I0116 09:07:07.615430 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:07:08.027271 containerd[1464]: time="2025-01-16T09:07:08.026441899Z" level=info msg="StopPodSandbox for \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\"" Jan 16 09:07:08.213356 kubelet[2534]: I0116 09:07:08.212580 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-665b6f6bf5-ch8jl" podStartSLOduration=41.1310874 podStartE2EDuration="49.212550016s" podCreationTimestamp="2025-01-16 09:06:19 +0000 UTC" firstStartedPulling="2025-01-16 09:06:58.188552794 +0000 UTC m=+57.614218038" lastFinishedPulling="2025-01-16 09:07:06.270015413 +0000 UTC m=+65.695680654" observedRunningTime="2025-01-16 09:07:06.625192929 +0000 UTC m=+66.050858228" watchObservedRunningTime="2025-01-16 09:07:08.212550016 +0000 UTC m=+67.638215278" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.216 [INFO][4827] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.216 [INFO][4827] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" iface="eth0" netns="/var/run/netns/cni-6aeffcbd-d330-b20f-e6a5-3906dabfb135" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.218 [INFO][4827] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" iface="eth0" netns="/var/run/netns/cni-6aeffcbd-d330-b20f-e6a5-3906dabfb135" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.221 [INFO][4827] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" iface="eth0" netns="/var/run/netns/cni-6aeffcbd-d330-b20f-e6a5-3906dabfb135" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.221 [INFO][4827] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.221 [INFO][4827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.385 [INFO][4838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" HandleID="k8s-pod-network.1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.385 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.385 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.413 [WARNING][4838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" HandleID="k8s-pod-network.1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.413 [INFO][4838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" HandleID="k8s-pod-network.1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0" Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.422 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:07:08.439834 containerd[1464]: 2025-01-16 09:07:08.431 [INFO][4827] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Jan 16 09:07:08.442770 containerd[1464]: time="2025-01-16T09:07:08.442179066Z" level=info msg="TearDown network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\" successfully" Jan 16 09:07:08.442770 containerd[1464]: time="2025-01-16T09:07:08.442221449Z" level=info msg="StopPodSandbox for \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\" returns successfully" Jan 16 09:07:08.447130 kubelet[2534]: E0116 09:07:08.445477 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:07:08.447329 containerd[1464]: time="2025-01-16T09:07:08.446562523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b9474,Uid:bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba,Namespace:kube-system,Attempt:1,}" Jan 16 09:07:08.456181 systemd[1]: run-netns-cni\x2d6aeffcbd\x2dd330\x2db20f\x2de6a5\x2d3906dabfb135.mount: Deactivated successfully. 
Jan 16 09:07:08.939751 kubelet[2534]: I0116 09:07:08.936294 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 16 09:07:08.972198 systemd-networkd[1372]: calib9a97b3066d: Link UP
Jan 16 09:07:08.975811 systemd-networkd[1372]: calib9a97b3066d: Gained carrier
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.661 [INFO][4848] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0 coredns-6f6b679f8f- kube-system bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba 942 0 2025-01-16 09:06:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-f-3b05cacdca coredns-6f6b679f8f-b9474 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib9a97b3066d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-b9474" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.662 [INFO][4848] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-b9474" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.787 [INFO][4858] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" HandleID="k8s-pod-network.1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.813 [INFO][4858] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" HandleID="k8s-pod-network.1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000266b60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-f-3b05cacdca", "pod":"coredns-6f6b679f8f-b9474", "timestamp":"2025-01-16 09:07:08.787096026 +0000 UTC"}, Hostname:"ci-4081.3.0-f-3b05cacdca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.813 [INFO][4858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.813 [INFO][4858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.813 [INFO][4858] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-3b05cacdca'
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.823 [INFO][4858] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.843 [INFO][4858] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.868 [INFO][4858] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.881 [INFO][4858] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.890 [INFO][4858] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.890 [INFO][4858] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.900 [INFO][4858] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.919 [INFO][4858] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.939 [INFO][4858] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.198/26] block=192.168.65.192/26 handle="k8s-pod-network.1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.939 [INFO][4858] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.198/26] handle="k8s-pod-network.1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" host="ci-4081.3.0-f-3b05cacdca"
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.939 [INFO][4858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 16 09:07:09.056137 containerd[1464]: 2025-01-16 09:07:08.939 [INFO][4858] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.198/26] IPv6=[] ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" HandleID="k8s-pod-network.1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:07:09.058702 containerd[1464]: 2025-01-16 09:07:08.946 [INFO][4848] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-b9474" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"", Pod:"coredns-6f6b679f8f-b9474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9a97b3066d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 16 09:07:09.058702 containerd[1464]: 2025-01-16 09:07:08.946 [INFO][4848] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.198/32] ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-b9474" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:07:09.058702 containerd[1464]: 2025-01-16 09:07:08.946 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9a97b3066d ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-b9474" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:07:09.058702 containerd[1464]: 2025-01-16 09:07:08.975 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-b9474" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:07:09.058702 containerd[1464]: 2025-01-16 09:07:08.986 [INFO][4848] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-b9474" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6", Pod:"coredns-6f6b679f8f-b9474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9a97b3066d", MAC:"4e:b6:1e:b5:7a:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 16 09:07:09.060541 containerd[1464]: 2025-01-16 09:07:09.042 [INFO][4848] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-b9474" WorkloadEndpoint="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:07:09.269288 containerd[1464]: time="2025-01-16T09:07:09.264697415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 09:07:09.269288 containerd[1464]: time="2025-01-16T09:07:09.264809046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 09:07:09.269288 containerd[1464]: time="2025-01-16T09:07:09.264846113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:07:09.269288 containerd[1464]: time="2025-01-16T09:07:09.264997972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:07:09.403156 systemd[1]: run-containerd-runc-k8s.io-1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6-runc.fMNWIp.mount: Deactivated successfully.
Jan 16 09:07:09.439438 systemd[1]: Started cri-containerd-1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6.scope - libcontainer container 1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6.
Jan 16 09:07:09.564337 containerd[1464]: time="2025-01-16T09:07:09.564176242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b9474,Uid:bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba,Namespace:kube-system,Attempt:1,} returns sandbox id \"1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6\""
Jan 16 09:07:09.579350 kubelet[2534]: E0116 09:07:09.579286 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:07:09.693564 containerd[1464]: time="2025-01-16T09:07:09.686964998Z" level=info msg="CreateContainer within sandbox \"1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 16 09:07:09.780392 containerd[1464]: time="2025-01-16T09:07:09.780327357Z" level=info msg="CreateContainer within sandbox \"1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"811236cda733189891a24bbf5e2d81f3bbbf8969f84a4ec45cc6a291b9d9875c\""
Jan 16 09:07:09.782803 containerd[1464]: time="2025-01-16T09:07:09.782747083Z" level=info msg="StartContainer for \"811236cda733189891a24bbf5e2d81f3bbbf8969f84a4ec45cc6a291b9d9875c\""
Jan 16 09:07:09.907383 systemd[1]: Started cri-containerd-811236cda733189891a24bbf5e2d81f3bbbf8969f84a4ec45cc6a291b9d9875c.scope - libcontainer container 811236cda733189891a24bbf5e2d81f3bbbf8969f84a4ec45cc6a291b9d9875c.
Jan 16 09:07:10.064480 containerd[1464]: time="2025-01-16T09:07:10.062945651Z" level=info msg="StartContainer for \"811236cda733189891a24bbf5e2d81f3bbbf8969f84a4ec45cc6a291b9d9875c\" returns successfully"
Jan 16 09:07:10.588148 containerd[1464]: time="2025-01-16T09:07:10.587150095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:07:10.590432 containerd[1464]: time="2025-01-16T09:07:10.590232443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Jan 16 09:07:10.595012 containerd[1464]: time="2025-01-16T09:07:10.591610025Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:07:10.595012 containerd[1464]: time="2025-01-16T09:07:10.594952322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:07:10.599198 containerd[1464]: time="2025-01-16T09:07:10.597845834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.327532828s"
Jan 16 09:07:10.599198 containerd[1464]: time="2025-01-16T09:07:10.598018205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Jan 16 09:07:10.603049 containerd[1464]: time="2025-01-16T09:07:10.602127866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 16 09:07:10.603723 systemd[1]: Started sshd@9-137.184.14.123:22-139.178.68.195:57372.service - OpenSSH per-connection server daemon (139.178.68.195:57372).
Jan 16 09:07:10.637237 containerd[1464]: time="2025-01-16T09:07:10.633544764Z" level=info msg="CreateContainer within sandbox \"b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 16 09:07:10.707918 kubelet[2534]: E0116 09:07:10.706913 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:07:10.738800 containerd[1464]: time="2025-01-16T09:07:10.737915989Z" level=info msg="CreateContainer within sandbox \"b4ad777a67db3e4757618cc220435acfc0fd9736b95fc7519d35367f9c5d3c24\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f66bd1d6595711a73a59e51d1b96cbc8a0941bfdc24b87a82fa95de2d66a986b\""
Jan 16 09:07:10.740204 containerd[1464]: time="2025-01-16T09:07:10.739073142Z" level=info msg="StartContainer for \"f66bd1d6595711a73a59e51d1b96cbc8a0941bfdc24b87a82fa95de2d66a986b\""
Jan 16 09:07:10.786466 kubelet[2534]: I0116 09:07:10.783469 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-b9474" podStartSLOduration=66.783433787 podStartE2EDuration="1m6.783433787s" podCreationTimestamp="2025-01-16 09:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:07:10.780489125 +0000 UTC m=+70.206154389" watchObservedRunningTime="2025-01-16 09:07:10.783433787 +0000 UTC m=+70.209099053"
Jan 16 09:07:10.807169 systemd-networkd[1372]: calib9a97b3066d: Gained IPv6LL
Jan 16 09:07:10.932489 sshd[4970]: Accepted publickey for core from 139.178.68.195 port 57372 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:10.935887 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:10.958769 systemd[1]: Started cri-containerd-f66bd1d6595711a73a59e51d1b96cbc8a0941bfdc24b87a82fa95de2d66a986b.scope - libcontainer container f66bd1d6595711a73a59e51d1b96cbc8a0941bfdc24b87a82fa95de2d66a986b.
Jan 16 09:07:10.972290 systemd-logind[1447]: New session 10 of user core.
Jan 16 09:07:10.982390 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 16 09:07:11.029026 kubelet[2534]: E0116 09:07:11.027454 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:07:11.241095 containerd[1464]: time="2025-01-16T09:07:11.240859587Z" level=info msg="StartContainer for \"f66bd1d6595711a73a59e51d1b96cbc8a0941bfdc24b87a82fa95de2d66a986b\" returns successfully"
Jan 16 09:07:11.276305 containerd[1464]: time="2025-01-16T09:07:11.274209743Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:07:11.304519 containerd[1464]: time="2025-01-16T09:07:11.304376075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Jan 16 09:07:11.308147 containerd[1464]: time="2025-01-16T09:07:11.307493551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 705.318964ms"
Jan 16 09:07:11.308147 containerd[1464]: time="2025-01-16T09:07:11.307587385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 16 09:07:11.313653 containerd[1464]: time="2025-01-16T09:07:11.312825227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 16 09:07:11.319380 containerd[1464]: time="2025-01-16T09:07:11.318797857Z" level=info msg="CreateContainer within sandbox \"bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 16 09:07:11.386828 containerd[1464]: time="2025-01-16T09:07:11.386531882Z" level=info msg="CreateContainer within sandbox \"bfcbb128e666b1926789256dfcfd086b0867d7b2dd4e04eb75544bce677341bd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d9f752cefb9b3f3a928ccf5701bffb3bc5bc1d3955353496ecedc7a96fcc564\""
Jan 16 09:07:11.390339 containerd[1464]: time="2025-01-16T09:07:11.390292451Z" level=info msg="StartContainer for \"2d9f752cefb9b3f3a928ccf5701bffb3bc5bc1d3955353496ecedc7a96fcc564\""
Jan 16 09:07:11.535712 systemd[1]: Started cri-containerd-2d9f752cefb9b3f3a928ccf5701bffb3bc5bc1d3955353496ecedc7a96fcc564.scope - libcontainer container 2d9f752cefb9b3f3a928ccf5701bffb3bc5bc1d3955353496ecedc7a96fcc564.
Jan 16 09:07:11.725288 kubelet[2534]: E0116 09:07:11.722698 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:07:11.869694 kubelet[2534]: I0116 09:07:11.869273 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c7c5469d4-468p9" podStartSLOduration=40.513524036 podStartE2EDuration="51.86924078s" podCreationTimestamp="2025-01-16 09:06:20 +0000 UTC" firstStartedPulling="2025-01-16 09:06:59.24549949 +0000 UTC m=+58.671164747" lastFinishedPulling="2025-01-16 09:07:10.601216224 +0000 UTC m=+70.026881491" observedRunningTime="2025-01-16 09:07:11.866674589 +0000 UTC m=+71.292339918" watchObservedRunningTime="2025-01-16 09:07:11.86924078 +0000 UTC m=+71.294906045"
Jan 16 09:07:11.878870 containerd[1464]: time="2025-01-16T09:07:11.878435396Z" level=info msg="StartContainer for \"2d9f752cefb9b3f3a928ccf5701bffb3bc5bc1d3955353496ecedc7a96fcc564\" returns successfully"
Jan 16 09:07:12.502965 sshd[4970]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:12.522433 systemd[1]: sshd@9-137.184.14.123:22-139.178.68.195:57372.service: Deactivated successfully.
Jan 16 09:07:12.529520 systemd[1]: session-10.scope: Deactivated successfully.
Jan 16 09:07:12.535959 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit.
Jan 16 09:07:12.553350 systemd[1]: Started sshd@10-137.184.14.123:22-139.178.68.195:57376.service - OpenSSH per-connection server daemon (139.178.68.195:57376).
Jan 16 09:07:12.564430 systemd-logind[1447]: Removed session 10.
Jan 16 09:07:12.640475 sshd[5108]: Accepted publickey for core from 139.178.68.195 port 57376 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:12.644815 sshd[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:12.655880 systemd-logind[1447]: New session 11 of user core.
Jan 16 09:07:12.659352 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 16 09:07:12.745522 kubelet[2534]: E0116 09:07:12.745466 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:07:13.291358 sshd[5108]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:13.324969 systemd[1]: sshd@10-137.184.14.123:22-139.178.68.195:57376.service: Deactivated successfully.
Jan 16 09:07:13.331544 systemd[1]: session-11.scope: Deactivated successfully.
Jan 16 09:07:13.339729 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit.
Jan 16 09:07:13.345831 systemd[1]: Started sshd@11-137.184.14.123:22-139.178.68.195:57386.service - OpenSSH per-connection server daemon (139.178.68.195:57386).
Jan 16 09:07:13.359196 systemd-logind[1447]: Removed session 11.
Jan 16 09:07:13.489035 sshd[5125]: Accepted publickey for core from 139.178.68.195 port 57386 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:13.496039 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:13.511685 systemd-logind[1447]: New session 12 of user core.
Jan 16 09:07:13.518300 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 16 09:07:14.097593 sshd[5125]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:14.116232 systemd[1]: sshd@11-137.184.14.123:22-139.178.68.195:57386.service: Deactivated successfully.
Jan 16 09:07:14.116598 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit.
Jan 16 09:07:14.127382 systemd[1]: session-12.scope: Deactivated successfully.
Jan 16 09:07:14.143140 systemd-logind[1447]: Removed session 12.
Jan 16 09:07:14.210720 containerd[1464]: time="2025-01-16T09:07:14.210581813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:07:14.212936 containerd[1464]: time="2025-01-16T09:07:14.212691520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 16 09:07:14.216612 containerd[1464]: time="2025-01-16T09:07:14.216477758Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:07:14.234428 containerd[1464]: time="2025-01-16T09:07:14.234335000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 09:07:14.236045 containerd[1464]: time="2025-01-16T09:07:14.235916055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.92302541s"
Jan 16 09:07:14.236320 containerd[1464]: time="2025-01-16T09:07:14.236282854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 16 09:07:14.257115 containerd[1464]: time="2025-01-16T09:07:14.256628414Z" level=info msg="CreateContainer within sandbox \"2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 16 09:07:14.327967 containerd[1464]: time="2025-01-16T09:07:14.327556647Z" level=info msg="CreateContainer within sandbox \"2382176be03946d0f368078f9f5ea0b12a41fed81f7e4fc03de9219e0f6b30bd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"076f62fb709efaad36898ec06aa52a8b3bbd77feb4fa5fb62d36f86e1086d562\""
Jan 16 09:07:14.330841 containerd[1464]: time="2025-01-16T09:07:14.330583660Z" level=info msg="StartContainer for \"076f62fb709efaad36898ec06aa52a8b3bbd77feb4fa5fb62d36f86e1086d562\""
Jan 16 09:07:14.441737 systemd[1]: run-containerd-runc-k8s.io-076f62fb709efaad36898ec06aa52a8b3bbd77feb4fa5fb62d36f86e1086d562-runc.hXpifs.mount: Deactivated successfully.
Jan 16 09:07:14.460362 systemd[1]: Started cri-containerd-076f62fb709efaad36898ec06aa52a8b3bbd77feb4fa5fb62d36f86e1086d562.scope - libcontainer container 076f62fb709efaad36898ec06aa52a8b3bbd77feb4fa5fb62d36f86e1086d562.
Jan 16 09:07:14.599814 containerd[1464]: time="2025-01-16T09:07:14.599493655Z" level=info msg="StartContainer for \"076f62fb709efaad36898ec06aa52a8b3bbd77feb4fa5fb62d36f86e1086d562\" returns successfully"
Jan 16 09:07:14.785723 kubelet[2534]: I0116 09:07:14.783957 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 16 09:07:14.853866 kubelet[2534]: I0116 09:07:14.853393 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vhc8p" podStartSLOduration=38.432932326 podStartE2EDuration="54.853370589s" podCreationTimestamp="2025-01-16 09:06:20 +0000 UTC" firstStartedPulling="2025-01-16 09:06:57.819701632 +0000 UTC m=+57.245366911" lastFinishedPulling="2025-01-16 09:07:14.240139911 +0000 UTC m=+73.665805174" observedRunningTime="2025-01-16 09:07:14.847469667 +0000 UTC m=+74.273134929" watchObservedRunningTime="2025-01-16 09:07:14.853370589 +0000 UTC m=+74.279035851"
Jan 16 09:07:14.853866 kubelet[2534]: I0116 09:07:14.853556 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-665b6f6bf5-2vcpf" podStartSLOduration=44.487308277 podStartE2EDuration="55.853548435s" podCreationTimestamp="2025-01-16 09:06:19 +0000 UTC" firstStartedPulling="2025-01-16 09:06:59.945314254 +0000 UTC m=+59.370979509" lastFinishedPulling="2025-01-16 09:07:11.311554409 +0000 UTC m=+70.737219667" observedRunningTime="2025-01-16 09:07:12.799043667 +0000 UTC m=+72.224708934" watchObservedRunningTime="2025-01-16 09:07:14.853548435 +0000 UTC m=+74.279213697"
Jan 16 09:07:15.734833 kubelet[2534]: I0116 09:07:15.734730 2534 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 16 09:07:15.734833 kubelet[2534]: I0116 09:07:15.734842 2534 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 16 09:07:16.413274 kubelet[2534]: E0116 09:07:16.413198 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:07:17.025565 kubelet[2534]: E0116 09:07:17.025125 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:07:19.126103 systemd[1]: Started sshd@12-137.184.14.123:22-139.178.68.195:48078.service - OpenSSH per-connection server daemon (139.178.68.195:48078).
Jan 16 09:07:19.331894 sshd[5209]: Accepted publickey for core from 139.178.68.195 port 48078 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:19.335938 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:19.357134 systemd-logind[1447]: New session 13 of user core.
Jan 16 09:07:19.363647 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 16 09:07:20.329047 sshd[5209]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:20.341064 systemd[1]: sshd@12-137.184.14.123:22-139.178.68.195:48078.service: Deactivated successfully.
Jan 16 09:07:20.348637 systemd[1]: session-13.scope: Deactivated successfully.
Jan 16 09:07:20.351855 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit.
Jan 16 09:07:20.355859 systemd-logind[1447]: Removed session 13.
Jan 16 09:07:25.357498 systemd[1]: Started sshd@13-137.184.14.123:22-139.178.68.195:53204.service - OpenSSH per-connection server daemon (139.178.68.195:53204).
Jan 16 09:07:25.520021 sshd[5235]: Accepted publickey for core from 139.178.68.195 port 53204 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:25.523856 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:25.537190 systemd-logind[1447]: New session 14 of user core.
Jan 16 09:07:25.545408 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 16 09:07:26.101008 sshd[5235]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:26.120571 systemd[1]: sshd@13-137.184.14.123:22-139.178.68.195:53204.service: Deactivated successfully.
Jan 16 09:07:26.134700 systemd[1]: session-14.scope: Deactivated successfully.
Jan 16 09:07:26.139369 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit.
Jan 16 09:07:26.143236 systemd-logind[1447]: Removed session 14.
Jan 16 09:07:27.024033 kubelet[2534]: E0116 09:07:27.023937 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:07:31.123569 systemd[1]: Started sshd@14-137.184.14.123:22-139.178.68.195:53210.service - OpenSSH per-connection server daemon (139.178.68.195:53210).
Jan 16 09:07:31.189227 sshd[5251]: Accepted publickey for core from 139.178.68.195 port 53210 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:31.191872 sshd[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:31.204735 systemd-logind[1447]: New session 15 of user core.
Jan 16 09:07:31.212738 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 16 09:07:31.457488 sshd[5251]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:31.463768 systemd[1]: sshd@14-137.184.14.123:22-139.178.68.195:53210.service: Deactivated successfully.
Jan 16 09:07:31.468921 systemd[1]: session-15.scope: Deactivated successfully.
Jan 16 09:07:31.472473 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit.
Jan 16 09:07:31.475008 systemd-logind[1447]: Removed session 15.
Jan 16 09:07:36.502321 systemd[1]: Started sshd@15-137.184.14.123:22-139.178.68.195:60466.service - OpenSSH per-connection server daemon (139.178.68.195:60466).
Jan 16 09:07:36.598345 sshd[5265]: Accepted publickey for core from 139.178.68.195 port 60466 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:36.599678 sshd[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:36.615176 systemd-logind[1447]: New session 16 of user core.
Jan 16 09:07:36.624595 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 16 09:07:37.061020 sshd[5265]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:37.073795 systemd[1]: sshd@15-137.184.14.123:22-139.178.68.195:60466.service: Deactivated successfully.
Jan 16 09:07:37.079685 systemd[1]: session-16.scope: Deactivated successfully.
Jan 16 09:07:37.081848 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit.
Jan 16 09:07:37.090661 systemd[1]: Started sshd@16-137.184.14.123:22-139.178.68.195:60482.service - OpenSSH per-connection server daemon (139.178.68.195:60482).
Jan 16 09:07:37.093700 systemd-logind[1447]: Removed session 16.
Jan 16 09:07:37.261547 sshd[5278]: Accepted publickey for core from 139.178.68.195 port 60482 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:37.264751 sshd[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:37.275516 systemd-logind[1447]: New session 17 of user core.
Jan 16 09:07:37.282407 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 16 09:07:37.903117 sshd[5278]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:37.915232 systemd[1]: sshd@16-137.184.14.123:22-139.178.68.195:60482.service: Deactivated successfully.
Jan 16 09:07:37.918741 systemd[1]: session-17.scope: Deactivated successfully.
Jan 16 09:07:37.922585 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit.
Jan 16 09:07:37.930558 systemd[1]: Started sshd@17-137.184.14.123:22-139.178.68.195:60496.service - OpenSSH per-connection server daemon (139.178.68.195:60496).
Jan 16 09:07:37.938688 systemd-logind[1447]: Removed session 17.
Jan 16 09:07:38.002935 sshd[5289]: Accepted publickey for core from 139.178.68.195 port 60496 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:38.006130 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:38.020431 systemd-logind[1447]: New session 18 of user core.
Jan 16 09:07:38.048025 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 16 09:07:39.833935 systemd[1]: run-containerd-runc-k8s.io-f66bd1d6595711a73a59e51d1b96cbc8a0941bfdc24b87a82fa95de2d66a986b-runc.bax9nT.mount: Deactivated successfully.
Jan 16 09:07:40.074020 kubelet[2534]: E0116 09:07:40.040121 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:07:41.879495 sshd[5289]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:41.901717 systemd[1]: sshd@17-137.184.14.123:22-139.178.68.195:60496.service: Deactivated successfully.
Jan 16 09:07:41.912921 systemd[1]: session-18.scope: Deactivated successfully.
Jan 16 09:07:41.919865 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit.
Jan 16 09:07:41.934584 systemd[1]: Started sshd@18-137.184.14.123:22-139.178.68.195:60508.service - OpenSSH per-connection server daemon (139.178.68.195:60508).
Jan 16 09:07:41.952719 systemd-logind[1447]: Removed session 18.
Jan 16 09:07:42.181888 sshd[5326]: Accepted publickey for core from 139.178.68.195 port 60508 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:42.188492 sshd[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:42.200668 systemd-logind[1447]: New session 19 of user core.
Jan 16 09:07:42.210326 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 16 09:07:43.608048 sshd[5326]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:43.624321 systemd[1]: sshd@18-137.184.14.123:22-139.178.68.195:60508.service: Deactivated successfully.
Jan 16 09:07:43.630637 systemd[1]: session-19.scope: Deactivated successfully.
Jan 16 09:07:43.636628 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit.
Jan 16 09:07:43.642217 systemd[1]: Started sshd@19-137.184.14.123:22-139.178.68.195:60524.service - OpenSSH per-connection server daemon (139.178.68.195:60524).
Jan 16 09:07:43.652426 systemd-logind[1447]: Removed session 19.
Jan 16 09:07:43.753813 sshd[5339]: Accepted publickey for core from 139.178.68.195 port 60524 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:43.758043 sshd[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:43.774509 systemd-logind[1447]: New session 20 of user core.
Jan 16 09:07:43.776652 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 16 09:07:44.025409 sshd[5339]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:44.036321 systemd[1]: sshd@19-137.184.14.123:22-139.178.68.195:60524.service: Deactivated successfully.
Jan 16 09:07:44.043893 systemd[1]: session-20.scope: Deactivated successfully.
Jan 16 09:07:44.046662 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Jan 16 09:07:44.049740 systemd-logind[1447]: Removed session 20.
Jan 16 09:07:49.050429 systemd[1]: Started sshd@20-137.184.14.123:22-139.178.68.195:55168.service - OpenSSH per-connection server daemon (139.178.68.195:55168).
Jan 16 09:07:49.151222 sshd[5379]: Accepted publickey for core from 139.178.68.195 port 55168 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:49.154623 sshd[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:49.166230 systemd-logind[1447]: New session 21 of user core.
Jan 16 09:07:49.172470 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 16 09:07:49.391547 sshd[5379]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:49.398525 systemd[1]: sshd@20-137.184.14.123:22-139.178.68.195:55168.service: Deactivated successfully.
Jan 16 09:07:49.405767 systemd[1]: session-21.scope: Deactivated successfully.
Jan 16 09:07:49.413863 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Jan 16 09:07:49.417552 systemd-logind[1447]: Removed session 21.
Jan 16 09:07:54.428858 systemd[1]: Started sshd@21-137.184.14.123:22-139.178.68.195:55176.service - OpenSSH per-connection server daemon (139.178.68.195:55176).
Jan 16 09:07:54.537159 sshd[5397]: Accepted publickey for core from 139.178.68.195 port 55176 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:54.542634 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:54.571119 systemd-logind[1447]: New session 22 of user core.
Jan 16 09:07:54.577406 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 16 09:07:54.827437 sshd[5397]: pam_unix(sshd:session): session closed for user core
Jan 16 09:07:54.834902 systemd[1]: sshd@21-137.184.14.123:22-139.178.68.195:55176.service: Deactivated successfully.
Jan 16 09:07:54.842233 systemd[1]: session-22.scope: Deactivated successfully.
Jan 16 09:07:54.851657 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit.
Jan 16 09:07:54.853498 systemd-logind[1447]: Removed session 22.
Jan 16 09:07:59.866646 systemd[1]: Started sshd@22-137.184.14.123:22-139.178.68.195:53686.service - OpenSSH per-connection server daemon (139.178.68.195:53686).
Jan 16 09:07:59.931873 sshd[5410]: Accepted publickey for core from 139.178.68.195 port 53686 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:07:59.936312 sshd[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:07:59.951032 systemd-logind[1447]: New session 23 of user core.
Jan 16 09:07:59.969371 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 16 09:08:00.288444 sshd[5410]: pam_unix(sshd:session): session closed for user core
Jan 16 09:08:00.297477 systemd[1]: sshd@22-137.184.14.123:22-139.178.68.195:53686.service: Deactivated successfully.
Jan 16 09:08:00.303853 systemd[1]: session-23.scope: Deactivated successfully.
Jan 16 09:08:00.311345 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Jan 16 09:08:00.314315 systemd-logind[1447]: Removed session 23.
Jan 16 09:08:04.080122 kubelet[2534]: E0116 09:08:04.072453 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:08:05.318139 systemd[1]: Started sshd@23-137.184.14.123:22-139.178.68.195:38534.service - OpenSSH per-connection server daemon (139.178.68.195:38534).
Jan 16 09:08:05.376079 sshd[5425]: Accepted publickey for core from 139.178.68.195 port 38534 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:08:05.379537 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:08:05.392951 systemd-logind[1447]: New session 24 of user core.
Jan 16 09:08:05.397407 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 16 09:08:05.664846 sshd[5425]: pam_unix(sshd:session): session closed for user core
Jan 16 09:08:05.671622 systemd[1]: sshd@23-137.184.14.123:22-139.178.68.195:38534.service: Deactivated successfully.
Jan 16 09:08:05.676649 systemd[1]: session-24.scope: Deactivated successfully.
Jan 16 09:08:05.679829 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Jan 16 09:08:05.682668 systemd-logind[1447]: Removed session 24.
Jan 16 09:08:05.954119 containerd[1464]: time="2025-01-16T09:08:05.954026756Z" level=info msg="StopPodSandbox for \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\""
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.413 [WARNING][5450] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6", Pod:"coredns-6f6b679f8f-b9474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9a97b3066d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.416 [INFO][5450] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.416 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" iface="eth0" netns=""
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.416 [INFO][5450] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.416 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.765 [INFO][5456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" HandleID="k8s-pod-network.1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.769 [INFO][5456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.769 [INFO][5456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.790 [WARNING][5456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" HandleID="k8s-pod-network.1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.790 [INFO][5456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" HandleID="k8s-pod-network.1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.795 [INFO][5456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 16 09:08:06.804061 containerd[1464]: 2025-01-16 09:08:06.799 [INFO][5450] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:08:06.819801 containerd[1464]: time="2025-01-16T09:08:06.809863531Z" level=info msg="TearDown network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\" successfully"
Jan 16 09:08:06.819801 containerd[1464]: time="2025-01-16T09:08:06.809925715Z" level=info msg="StopPodSandbox for \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\" returns successfully"
Jan 16 09:08:06.819801 containerd[1464]: time="2025-01-16T09:08:06.811868352Z" level=info msg="RemovePodSandbox for \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\""
Jan 16 09:08:06.819801 containerd[1464]: time="2025-01-16T09:08:06.811918428Z" level=info msg="Forcibly stopping sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\""
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.886 [WARNING][5474] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf0a906a-5aaa-4b41-ac45-1d14d68ce2ba", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 9, 6, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-3b05cacdca", ContainerID:"1bf32ce935e63b18136f5c3564a26bb342def6f600cc9932fb2aa920d2dc24e6", Pod:"coredns-6f6b679f8f-b9474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9a97b3066d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.887 [INFO][5474] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.887 [INFO][5474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" iface="eth0" netns=""
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.887 [INFO][5474] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.887 [INFO][5474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.954 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" HandleID="k8s-pod-network.1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.954 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.954 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.970 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" HandleID="k8s-pod-network.1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.970 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" HandleID="k8s-pod-network.1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5" Workload="ci--4081.3.0--f--3b05cacdca-k8s-coredns--6f6b679f8f--b9474-eth0"
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.975 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 16 09:08:06.988614 containerd[1464]: 2025-01-16 09:08:06.984 [INFO][5474] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5"
Jan 16 09:08:06.991053 containerd[1464]: time="2025-01-16T09:08:06.988671879Z" level=info msg="TearDown network for sandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\" successfully"
Jan 16 09:08:07.015631 containerd[1464]: time="2025-01-16T09:08:07.006441040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 16 09:08:07.015631 containerd[1464]: time="2025-01-16T09:08:07.006561369Z" level=info msg="RemovePodSandbox \"1e7a6af302f5000bffb639a4022486ac28089a40ffa06dda93102618328d1ea5\" returns successfully"
Jan 16 09:08:10.693532 systemd[1]: Started sshd@24-137.184.14.123:22-139.178.68.195:38542.service - OpenSSH per-connection server daemon (139.178.68.195:38542).
Jan 16 09:08:10.902938 sshd[5508]: Accepted publickey for core from 139.178.68.195 port 38542 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:08:10.906327 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:08:10.933538 systemd-logind[1447]: New session 25 of user core.
Jan 16 09:08:10.942362 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 16 09:08:11.157476 sshd[5508]: pam_unix(sshd:session): session closed for user core
Jan 16 09:08:11.165587 systemd[1]: sshd@24-137.184.14.123:22-139.178.68.195:38542.service: Deactivated successfully.
Jan 16 09:08:11.170462 systemd[1]: session-25.scope: Deactivated successfully.
Jan 16 09:08:11.172594 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit.
Jan 16 09:08:11.176588 systemd-logind[1447]: Removed session 25.