Jan 16 08:58:42.916598 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 16 08:58:42.916627 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 16 08:58:42.916639 kernel: BIOS-provided physical RAM map:
Jan 16 08:58:42.916646 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 16 08:58:42.916652 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 16 08:58:42.916659 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 16 08:58:42.916667 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 16 08:58:42.916673 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 16 08:58:42.916680 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 08:58:42.916690 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 16 08:58:42.916696 kernel: NX (Execute Disable) protection: active
Jan 16 08:58:42.916703 kernel: APIC: Static calls initialized
Jan 16 08:58:42.916716 kernel: SMBIOS 2.8 present.
Jan 16 08:58:42.916723 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 16 08:58:42.916732 kernel: Hypervisor detected: KVM
Jan 16 08:58:42.916743 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 16 08:58:42.916753 kernel: kvm-clock: using sched offset of 3057875299 cycles
Jan 16 08:58:42.916761 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 16 08:58:42.916769 kernel: tsc: Detected 2494.140 MHz processor
Jan 16 08:58:42.916778 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 16 08:58:42.916786 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 16 08:58:42.916794 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 16 08:58:42.916801 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 16 08:58:42.916809 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 16 08:58:42.916820 kernel: ACPI: Early table checksum verification disabled
Jan 16 08:58:42.916827 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 16 08:58:42.916835 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:42.916843 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:42.916872 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:42.916883 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 16 08:58:42.916894 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:42.916905 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:42.916916 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:42.916930 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:42.916942 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 16 08:58:42.916952 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 16 08:58:42.916963 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 16 08:58:42.916974 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 16 08:58:42.916984 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 16 08:58:42.916994 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 16 08:58:42.917011 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 16 08:58:42.917026 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 16 08:58:42.917036 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 16 08:58:42.917048 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 16 08:58:42.917059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 16 08:58:42.917075 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 16 08:58:42.917087 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 16 08:58:42.917103 kernel: Zone ranges:
Jan 16 08:58:42.917115 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 16 08:58:42.917127 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 16 08:58:42.917140 kernel: Normal empty
Jan 16 08:58:42.917150 kernel: Movable zone start for each node
Jan 16 08:58:42.917158 kernel: Early memory node ranges
Jan 16 08:58:42.917167 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 16 08:58:42.917175 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 16 08:58:42.917183 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 16 08:58:42.917191 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 08:58:42.917202 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 16 08:58:42.917214 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 16 08:58:42.917227 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 16 08:58:42.917236 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 16 08:58:42.917251 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 16 08:58:42.917259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 16 08:58:42.917268 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 16 08:58:42.917276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 16 08:58:42.917284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 16 08:58:42.917295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 16 08:58:42.917303 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 16 08:58:42.917311 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 16 08:58:42.917319 kernel: TSC deadline timer available
Jan 16 08:58:42.917328 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 16 08:58:42.917336 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 16 08:58:42.917344 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 16 08:58:42.917354 kernel: Booting paravirtualized kernel on KVM
Jan 16 08:58:42.917363 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 16 08:58:42.917374 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 16 08:58:42.917385 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 16 08:58:42.917395 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 16 08:58:42.917404 kernel: pcpu-alloc: [0] 0 1
Jan 16 08:58:42.917412 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 16 08:58:42.917421 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 16 08:58:42.917430 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 16 08:58:42.917439 kernel: random: crng init done
Jan 16 08:58:42.917450 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 08:58:42.917458 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 16 08:58:42.917466 kernel: Fallback order for Node 0: 0
Jan 16 08:58:42.917474 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 16 08:58:42.917483 kernel: Policy zone: DMA32
Jan 16 08:58:42.917491 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 08:58:42.917500 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Jan 16 08:58:42.917513 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 08:58:42.917525 kernel: Kernel/User page tables isolation: enabled
Jan 16 08:58:42.917541 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 16 08:58:42.917555 kernel: ftrace: allocated 149 pages with 4 groups
Jan 16 08:58:42.917563 kernel: Dynamic Preempt: voluntary
Jan 16 08:58:42.917572 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 08:58:42.917581 kernel: rcu: RCU event tracing is enabled.
Jan 16 08:58:42.917589 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 08:58:42.917598 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 08:58:42.917606 kernel: Rude variant of Tasks RCU enabled.
Jan 16 08:58:42.917614 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 08:58:42.917626 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 08:58:42.917634 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 08:58:42.917643 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 16 08:58:42.917651 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 08:58:42.917662 kernel: Console: colour VGA+ 80x25
Jan 16 08:58:42.917670 kernel: printk: console [tty0] enabled
Jan 16 08:58:42.917679 kernel: printk: console [ttyS0] enabled
Jan 16 08:58:42.917687 kernel: ACPI: Core revision 20230628
Jan 16 08:58:42.917695 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 16 08:58:42.917706 kernel: APIC: Switch to symmetric I/O mode setup
Jan 16 08:58:42.917714 kernel: x2apic enabled
Jan 16 08:58:42.917723 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 16 08:58:42.917731 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 16 08:58:42.917739 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 16 08:58:42.917748 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jan 16 08:58:42.917756 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 16 08:58:42.917765 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 16 08:58:42.917784 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 16 08:58:42.917793 kernel: Spectre V2 : Mitigation: Retpolines
Jan 16 08:58:42.917801 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 16 08:58:42.917810 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 16 08:58:42.917822 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 16 08:58:42.917830 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 16 08:58:42.917839 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 16 08:58:42.919905 kernel: MDS: Mitigation: Clear CPU buffers
Jan 16 08:58:42.919930 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 16 08:58:42.919962 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 16 08:58:42.919977 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 16 08:58:42.919991 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 16 08:58:42.920004 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 16 08:58:42.920018 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 16 08:58:42.920031 kernel: Freeing SMP alternatives memory: 32K
Jan 16 08:58:42.920044 kernel: pid_max: default: 32768 minimum: 301
Jan 16 08:58:42.920057 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 08:58:42.920074 kernel: landlock: Up and running.
Jan 16 08:58:42.920087 kernel: SELinux: Initializing.
Jan 16 08:58:42.920101 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 08:58:42.920115 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 08:58:42.920130 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 16 08:58:42.920144 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:58:42.920157 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:58:42.920169 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:58:42.920178 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 16 08:58:42.920190 kernel: signal: max sigframe size: 1776
Jan 16 08:58:42.920199 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 08:58:42.920210 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 08:58:42.920225 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 16 08:58:42.920239 kernel: smp: Bringing up secondary CPUs ...
Jan 16 08:58:42.920253 kernel: smpboot: x86: Booting SMP configuration:
Jan 16 08:58:42.920267 kernel: .... node #0, CPUs: #1
Jan 16 08:58:42.920281 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 08:58:42.920300 kernel: smpboot: Max logical packages: 1
Jan 16 08:58:42.920318 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jan 16 08:58:42.920332 kernel: devtmpfs: initialized
Jan 16 08:58:42.920343 kernel: x86/mm: Memory block size: 128MB
Jan 16 08:58:42.920352 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 08:58:42.920364 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 08:58:42.920373 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 08:58:42.920381 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 08:58:42.920395 kernel: audit: initializing netlink subsys (disabled)
Jan 16 08:58:42.920407 kernel: audit: type=2000 audit(1737017922.069:1): state=initialized audit_enabled=0 res=1
Jan 16 08:58:42.920425 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 08:58:42.920438 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 16 08:58:42.920447 kernel: cpuidle: using governor menu
Jan 16 08:58:42.920456 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 08:58:42.920465 kernel: dca service started, version 1.12.1
Jan 16 08:58:42.920474 kernel: PCI: Using configuration type 1 for base access
Jan 16 08:58:42.920483 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 16 08:58:42.920492 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 08:58:42.920501 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 08:58:42.920513 kernel: ACPI: Added _OSI(Module Device)
Jan 16 08:58:42.920522 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 08:58:42.920531 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 16 08:58:42.920540 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 08:58:42.920549 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 08:58:42.920558 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 16 08:58:42.920566 kernel: ACPI: Interpreter enabled
Jan 16 08:58:42.920575 kernel: ACPI: PM: (supports S0 S5)
Jan 16 08:58:42.920584 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 16 08:58:42.920595 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 16 08:58:42.920604 kernel: PCI: Using E820 reservations for host bridge windows
Jan 16 08:58:42.920613 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 16 08:58:42.920622 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 08:58:42.920840 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 08:58:42.922070 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 16 08:58:42.922197 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 16 08:58:42.922216 kernel: acpiphp: Slot [3] registered
Jan 16 08:58:42.922226 kernel: acpiphp: Slot [4] registered
Jan 16 08:58:42.922235 kernel: acpiphp: Slot [5] registered
Jan 16 08:58:42.922244 kernel: acpiphp: Slot [6] registered
Jan 16 08:58:42.922253 kernel: acpiphp: Slot [7] registered
Jan 16 08:58:42.922262 kernel: acpiphp: Slot [8] registered
Jan 16 08:58:42.922270 kernel: acpiphp: Slot [9] registered
Jan 16 08:58:42.922279 kernel: acpiphp: Slot [10] registered
Jan 16 08:58:42.922288 kernel: acpiphp: Slot [11] registered
Jan 16 08:58:42.922297 kernel: acpiphp: Slot [12] registered
Jan 16 08:58:42.922308 kernel: acpiphp: Slot [13] registered
Jan 16 08:58:42.922317 kernel: acpiphp: Slot [14] registered
Jan 16 08:58:42.922326 kernel: acpiphp: Slot [15] registered
Jan 16 08:58:42.922334 kernel: acpiphp: Slot [16] registered
Jan 16 08:58:42.922343 kernel: acpiphp: Slot [17] registered
Jan 16 08:58:42.922352 kernel: acpiphp: Slot [18] registered
Jan 16 08:58:42.922360 kernel: acpiphp: Slot [19] registered
Jan 16 08:58:42.922369 kernel: acpiphp: Slot [20] registered
Jan 16 08:58:42.922378 kernel: acpiphp: Slot [21] registered
Jan 16 08:58:42.922389 kernel: acpiphp: Slot [22] registered
Jan 16 08:58:42.922398 kernel: acpiphp: Slot [23] registered
Jan 16 08:58:42.922407 kernel: acpiphp: Slot [24] registered
Jan 16 08:58:42.922415 kernel: acpiphp: Slot [25] registered
Jan 16 08:58:42.922424 kernel: acpiphp: Slot [26] registered
Jan 16 08:58:42.922446 kernel: acpiphp: Slot [27] registered
Jan 16 08:58:42.922459 kernel: acpiphp: Slot [28] registered
Jan 16 08:58:42.922472 kernel: acpiphp: Slot [29] registered
Jan 16 08:58:42.922485 kernel: acpiphp: Slot [30] registered
Jan 16 08:58:42.922497 kernel: acpiphp: Slot [31] registered
Jan 16 08:58:42.922509 kernel: PCI host bridge to bus 0000:00
Jan 16 08:58:42.922655 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 16 08:58:42.922758 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 16 08:58:42.922842 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 16 08:58:42.923930 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 16 08:58:42.924020 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 16 08:58:42.924103 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 08:58:42.924263 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 16 08:58:42.924404 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 16 08:58:42.924535 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 16 08:58:42.924661 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 16 08:58:42.924794 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 16 08:58:42.925439 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 16 08:58:42.925568 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 16 08:58:42.925668 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 16 08:58:42.925772 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 16 08:58:42.925953 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 16 08:58:42.926070 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 16 08:58:42.926202 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 16 08:58:42.926304 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 16 08:58:42.926409 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 16 08:58:42.926517 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 16 08:58:42.926623 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 16 08:58:42.926730 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 16 08:58:42.926825 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 16 08:58:42.926943 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 16 08:58:42.927094 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 16 08:58:42.927245 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 16 08:58:42.927369 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 16 08:58:42.927469 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 16 08:58:42.927587 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 16 08:58:42.927702 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 16 08:58:42.927840 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 16 08:58:42.928036 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 16 08:58:42.928160 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 16 08:58:42.928256 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 16 08:58:42.928369 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 16 08:58:42.928462 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 16 08:58:42.928565 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 16 08:58:42.928660 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 16 08:58:42.928759 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 16 08:58:42.928874 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 16 08:58:42.929001 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 16 08:58:42.929116 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 16 08:58:42.929211 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 16 08:58:42.929313 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 16 08:58:42.929419 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 16 08:58:42.929520 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 16 08:58:42.929652 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 16 08:58:42.929670 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 16 08:58:42.929685 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 16 08:58:42.929699 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 16 08:58:42.929713 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 16 08:58:42.929728 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 16 08:58:42.929747 kernel: iommu: Default domain type: Translated
Jan 16 08:58:42.929761 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 16 08:58:42.929774 kernel: PCI: Using ACPI for IRQ routing
Jan 16 08:58:42.929788 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 16 08:58:42.929802 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 16 08:58:42.929816 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 16 08:58:42.932372 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 16 08:58:42.932486 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 16 08:58:42.932582 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 16 08:58:42.932600 kernel: vgaarb: loaded
Jan 16 08:58:42.932610 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 16 08:58:42.932619 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 16 08:58:42.932628 kernel: clocksource: Switched to clocksource kvm-clock
Jan 16 08:58:42.932638 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 08:58:42.932647 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 08:58:42.932656 kernel: pnp: PnP ACPI init
Jan 16 08:58:42.932665 kernel: pnp: PnP ACPI: found 4 devices
Jan 16 08:58:42.932675 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 16 08:58:42.932687 kernel: NET: Registered PF_INET protocol family
Jan 16 08:58:42.932695 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 08:58:42.932705 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 16 08:58:42.932714 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 08:58:42.932722 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 16 08:58:42.932731 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 16 08:58:42.932740 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 16 08:58:42.932749 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 08:58:42.932761 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 08:58:42.932770 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 08:58:42.932779 kernel: NET: Registered PF_XDP protocol family
Jan 16 08:58:42.932881 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 16 08:58:42.932965 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 08:58:42.933047 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 16 08:58:42.933131 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 16 08:58:42.933219 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 16 08:58:42.933353 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 16 08:58:42.933459 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 16 08:58:42.933478 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 16 08:58:42.933613 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 28334 usecs
Jan 16 08:58:42.933633 kernel: PCI: CLS 0 bytes, default 64
Jan 16 08:58:42.933648 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 16 08:58:42.933662 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 16 08:58:42.933676 kernel: Initialise system trusted keyrings
Jan 16 08:58:42.933691 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 16 08:58:42.933710 kernel: Key type asymmetric registered
Jan 16 08:58:42.933724 kernel: Asymmetric key parser 'x509' registered
Jan 16 08:58:42.933739 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 16 08:58:42.933753 kernel: io scheduler mq-deadline registered
Jan 16 08:58:42.933766 kernel: io scheduler kyber registered
Jan 16 08:58:42.933780 kernel: io scheduler bfq registered
Jan 16 08:58:42.933794 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 16 08:58:42.933809 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 16 08:58:42.933823 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 16 08:58:42.933840 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 16 08:58:42.935919 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 08:58:42.935941 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 16 08:58:42.935956 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 16 08:58:42.935972 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 16 08:58:42.935987 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 16 08:58:42.936222 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 16 08:58:42.936358 kernel: rtc_cmos 00:03: registered as rtc0
Jan 16 08:58:42.936387 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 16 08:58:42.936514 kernel: rtc_cmos 00:03: setting system clock to 2025-01-16T08:58:42 UTC (1737017922)
Jan 16 08:58:42.936646 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 16 08:58:42.936665 kernel: intel_pstate: CPU model not supported
Jan 16 08:58:42.936680 kernel: NET: Registered PF_INET6 protocol family
Jan 16 08:58:42.936695 kernel: Segment Routing with IPv6
Jan 16 08:58:42.936711 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 08:58:42.936726 kernel: NET: Registered PF_PACKET protocol family
Jan 16 08:58:42.936741 kernel: Key type dns_resolver registered
Jan 16 08:58:42.936762 kernel: IPI shorthand broadcast: enabled
Jan 16 08:58:42.936775 kernel: sched_clock: Marking stable (877004525, 91533653)->(983858427, -15320249)
Jan 16 08:58:42.936790 kernel: registered taskstats version 1
Jan 16 08:58:42.936805 kernel: Loading compiled-in X.509 certificates
Jan 16 08:58:42.936821 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 16 08:58:42.936835 kernel: Key type .fscrypt registered
Jan 16 08:58:42.937889 kernel: Key type fscrypt-provisioning registered
Jan 16 08:58:42.937905 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 08:58:42.937920 kernel: ima: Allocated hash algorithm: sha1
Jan 16 08:58:42.937929 kernel: ima: No architecture policies found
Jan 16 08:58:42.937938 kernel: clk: Disabling unused clocks
Jan 16 08:58:42.937962 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 16 08:58:42.937971 kernel: Write protecting the kernel read-only data: 36864k
Jan 16 08:58:42.937998 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 16 08:58:42.938011 kernel: Run /init as init process
Jan 16 08:58:42.938020 kernel: with arguments:
Jan 16 08:58:42.938030 kernel: /init
Jan 16 08:58:42.938042 kernel: with environment:
Jan 16 08:58:42.938051 kernel: HOME=/
Jan 16 08:58:42.938060 kernel: TERM=linux
Jan 16 08:58:42.938069 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 16 08:58:42.938082 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 08:58:42.938095 systemd[1]: Detected virtualization kvm.
Jan 16 08:58:42.938105 systemd[1]: Detected architecture x86-64.
Jan 16 08:58:42.938116 systemd[1]: Running in initrd.
Jan 16 08:58:42.938134 systemd[1]: No hostname configured, using default hostname.
Jan 16 08:58:42.938143 systemd[1]: Hostname set to .
Jan 16 08:58:42.938153 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 08:58:42.938169 systemd[1]: Queued start job for default target initrd.target.
Jan 16 08:58:42.938185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 08:58:42.938200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 08:58:42.938216 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 08:58:42.938230 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 08:58:42.938243 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 08:58:42.938254 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 08:58:42.938265 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 08:58:42.938275 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 08:58:42.938286 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 08:58:42.938304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 08:58:42.938319 systemd[1]: Reached target paths.target - Path Units.
Jan 16 08:58:42.938336 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 08:58:42.938350 systemd[1]: Reached target swap.target - Swaps.
Jan 16 08:58:42.938367 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 08:58:42.938381 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 08:58:42.938394 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 08:58:42.938413 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 08:58:42.938429 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 08:58:42.938456 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 08:58:42.938470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 08:58:42.938485 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 08:58:42.938499 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 08:58:42.938514 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 08:58:42.938529 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 08:58:42.938541 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 08:58:42.938555 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 08:58:42.938565 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 08:58:42.938575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 08:58:42.938585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:58:42.938595 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 08:58:42.938605 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 08:58:42.938615 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 08:58:42.938668 systemd-journald[184]: Collecting audit messages is disabled.
Jan 16 08:58:42.938696 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 08:58:42.938708 systemd-journald[184]: Journal started
Jan 16 08:58:42.938730 systemd-journald[184]: Runtime Journal (/run/log/journal/da253e18ee1b4e768f5d0551817b82d7) is 4.9M, max 39.3M, 34.4M free.
Jan 16 08:58:42.941889 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 08:58:42.910246 systemd-modules-load[185]: Inserted module 'overlay'
Jan 16 08:58:42.952061 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 08:58:42.952139 kernel: Bridge firewalling registered
Jan 16 08:58:42.951691 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 16 08:58:42.952035 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 08:58:42.980587 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 08:58:42.981382 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:42.985152 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 08:58:42.994168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:58:42.996016 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 08:58:42.998007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 08:58:42.998900 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 08:58:43.018417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 08:58:43.019652 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:58:43.020766 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 08:58:43.026050 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 08:58:43.028022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 08:58:43.043752 dracut-cmdline[217]: dracut-dracut-053
Jan 16 08:58:43.051875 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 16 08:58:43.065044 systemd-resolved[218]: Positive Trust Anchors:
Jan 16 08:58:43.065058 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 08:58:43.065096 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 08:58:43.067911 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 16 08:58:43.069717 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 08:58:43.070724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 08:58:43.143910 kernel: SCSI subsystem initialized
Jan 16 08:58:43.154926 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 08:58:43.165886 kernel: iscsi: registered transport (tcp)
Jan 16 08:58:43.188019 kernel: iscsi: registered transport (qla4xxx)
Jan 16 08:58:43.188121 kernel: QLogic iSCSI HBA Driver
Jan 16 08:58:43.240492 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 08:58:43.246100 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 08:58:43.274150 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 08:58:43.274226 kernel: device-mapper: uevent: version 1.0.3
Jan 16 08:58:43.275376 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 08:58:43.319893 kernel: raid6: avx2x4 gen() 16129 MB/s
Jan 16 08:58:43.346889 kernel: raid6: avx2x2 gen() 17014 MB/s
Jan 16 08:58:43.352958 kernel: raid6: avx2x1 gen() 12722 MB/s
Jan 16 08:58:43.353048 kernel: raid6: using algorithm avx2x2 gen() 17014 MB/s
Jan 16 08:58:43.371016 kernel: raid6: .... xor() 20072 MB/s, rmw enabled
Jan 16 08:58:43.371092 kernel: raid6: using avx2x2 recovery algorithm
Jan 16 08:58:43.392884 kernel: xor: automatically using best checksumming function avx
Jan 16 08:58:43.554892 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 08:58:43.568567 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 08:58:43.574107 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 08:58:43.603363 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 16 08:58:43.608822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 08:58:43.617095 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 08:58:43.633601 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 16 08:58:43.672219 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 08:58:43.679159 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 08:58:43.750500 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 08:58:43.760113 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 08:58:43.780026 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 08:58:43.786770 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 08:58:43.787362 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 08:58:43.788551 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 08:58:43.795533 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 08:58:43.817832 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 08:58:43.857878 kernel: libata version 3.00 loaded.
Jan 16 08:58:43.862128 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 16 08:58:43.912453 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 16 08:58:43.912689 kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 08:58:43.912704 kernel: scsi host0: Virtio SCSI HBA
Jan 16 08:58:43.912917 kernel: scsi host1: ata_piix
Jan 16 08:58:43.913090 kernel: scsi host2: ata_piix
Jan 16 08:58:43.913229 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 16 08:58:43.913251 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 16 08:58:43.913263 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 16 08:58:43.913385 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 16 08:58:43.913399 kernel: AES CTR mode by8 optimization enabled
Jan 16 08:58:43.913417 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 08:58:43.913435 kernel: GPT:9289727 != 125829119
Jan 16 08:58:43.913447 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 08:58:43.913459 kernel: GPT:9289727 != 125829119
Jan 16 08:58:43.913475 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 08:58:43.913486 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:58:43.900888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 08:58:43.901008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:58:43.901615 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:58:43.901992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 08:58:43.902128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:43.902613 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:58:43.909530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:58:43.922278 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 16 08:58:43.924516 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Jan 16 08:58:43.941445 kernel: ACPI: bus type USB registered
Jan 16 08:58:43.941518 kernel: usbcore: registered new interface driver usbfs
Jan 16 08:58:43.941532 kernel: usbcore: registered new interface driver hub
Jan 16 08:58:43.942652 kernel: usbcore: registered new device driver usb
Jan 16 08:58:43.972843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:43.983151 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:58:44.002631 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:58:44.085514 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 16 08:58:44.093696 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 16 08:58:44.093916 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 16 08:58:44.094040 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 16 08:58:44.094156 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (448)
Jan 16 08:58:44.094171 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (450)
Jan 16 08:58:44.094184 kernel: hub 1-0:1.0: USB hub found
Jan 16 08:58:44.094322 kernel: hub 1-0:1.0: 2 ports detected
Jan 16 08:58:44.087625 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 16 08:58:44.099777 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 16 08:58:44.109408 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 08:58:44.113121 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 16 08:58:44.113564 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 16 08:58:44.120102 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 08:58:44.126170 disk-uuid[546]: Primary Header is updated.
Jan 16 08:58:44.126170 disk-uuid[546]: Secondary Entries is updated.
Jan 16 08:58:44.126170 disk-uuid[546]: Secondary Header is updated.
Jan 16 08:58:44.131983 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:58:44.135953 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:58:44.139884 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:58:45.142943 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:58:45.143803 disk-uuid[547]: The operation has completed successfully.
Jan 16 08:58:45.194701 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 08:58:45.194924 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 08:58:45.204159 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 16 08:58:45.213541 sh[560]: Success
Jan 16 08:58:45.232070 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 16 08:58:45.295867 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 08:58:45.303170 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 16 08:58:45.308232 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 16 08:58:45.328093 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 16 08:58:45.328167 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:58:45.328930 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 16 08:58:45.330417 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 08:58:45.330468 kernel: BTRFS info (device dm-0): using free space tree
Jan 16 08:58:45.339690 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 16 08:58:45.340763 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 08:58:45.360234 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 08:58:45.365075 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 08:58:45.373914 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 08:58:45.373995 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:58:45.374013 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:58:45.377880 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:58:45.390456 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 16 08:58:45.392788 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 08:58:45.397157 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 08:58:45.407658 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 08:58:45.486379 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 08:58:45.511075 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 08:58:45.535044 systemd-networkd[746]: lo: Link UP
Jan 16 08:58:45.535055 systemd-networkd[746]: lo: Gained carrier
Jan 16 08:58:45.539663 systemd-networkd[746]: Enumeration completed
Jan 16 08:58:45.540196 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 08:58:45.540201 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 16 08:58:45.540991 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 08:58:45.541608 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 08:58:45.545026 ignition[646]: Ignition 2.19.0
Jan 16 08:58:45.541612 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 08:58:45.545035 ignition[646]: Stage: fetch-offline
Jan 16 08:58:45.542838 systemd-networkd[746]: eth0: Link UP
Jan 16 08:58:45.545095 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:45.542843 systemd-networkd[746]: eth0: Gained carrier
Jan 16 08:58:45.545106 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:45.542863 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 08:58:45.545569 ignition[646]: parsed url from cmdline: ""
Jan 16 08:58:45.544339 systemd[1]: Reached target network.target - Network.
Jan 16 08:58:45.545574 ignition[646]: no config URL provided
Jan 16 08:58:45.546501 systemd-networkd[746]: eth1: Link UP
Jan 16 08:58:45.545584 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 08:58:45.546506 systemd-networkd[746]: eth1: Gained carrier
Jan 16 08:58:45.545594 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Jan 16 08:58:45.546521 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 08:58:45.545601 ignition[646]: failed to fetch config: resource requires networking
Jan 16 08:58:45.550434 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 08:58:45.547066 ignition[646]: Ignition finished successfully
Jan 16 08:58:45.560287 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 08:58:45.560976 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.28/20 acquired from 169.254.169.253
Jan 16 08:58:45.565156 systemd-networkd[746]: eth0: DHCPv4 address 147.182.202.230/20, gateway 147.182.192.1 acquired from 169.254.169.253
Jan 16 08:58:45.584960 ignition[754]: Ignition 2.19.0
Jan 16 08:58:45.585774 ignition[754]: Stage: fetch
Jan 16 08:58:45.586102 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:45.586119 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:45.586288 ignition[754]: parsed url from cmdline: ""
Jan 16 08:58:45.586294 ignition[754]: no config URL provided
Jan 16 08:58:45.586301 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 08:58:45.586313 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Jan 16 08:58:45.586409 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 16 08:58:45.601819 ignition[754]: GET result: OK
Jan 16 08:58:45.602088 ignition[754]: parsing config with SHA512: 9161b35bec8989e5317b5ef77a7898e4747a1a78ec8510d90905fb3c8a8dc584d49b0af8e5298df0c73464d1676cc878431c076b330a8238056e25bf6d980b11
Jan 16 08:58:45.611669 unknown[754]: fetched base config from "system"
Jan 16 08:58:45.611698 unknown[754]: fetched base config from "system"
Jan 16 08:58:45.611711 unknown[754]: fetched user config from "digitalocean"
Jan 16 08:58:45.612560 ignition[754]: fetch: fetch complete
Jan 16 08:58:45.612574 ignition[754]: fetch: fetch passed
Jan 16 08:58:45.615737 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 08:58:45.612653 ignition[754]: Ignition finished successfully
Jan 16 08:58:45.624202 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 08:58:45.645691 ignition[762]: Ignition 2.19.0
Jan 16 08:58:45.645707 ignition[762]: Stage: kargs
Jan 16 08:58:45.645985 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:45.645998 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:45.647057 ignition[762]: kargs: kargs passed
Jan 16 08:58:45.648306 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 08:58:45.647124 ignition[762]: Ignition finished successfully
Jan 16 08:58:45.655125 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 08:58:45.675260 ignition[768]: Ignition 2.19.0
Jan 16 08:58:45.675274 ignition[768]: Stage: disks
Jan 16 08:58:45.675500 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:45.675516 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:45.676396 ignition[768]: disks: disks passed
Jan 16 08:58:45.677505 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 08:58:45.676455 ignition[768]: Ignition finished successfully
Jan 16 08:58:45.682124 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 08:58:45.682743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 08:58:45.683556 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 08:58:45.684322 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 08:58:45.685092 systemd[1]: Reached target basic.target - Basic System.
Jan 16 08:58:45.694143 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 08:58:45.712357 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 16 08:58:45.715596 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 08:58:45.724036 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 08:58:45.864006 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 16 08:58:45.865958 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 08:58:45.868070 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 08:58:45.879144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 08:58:45.884193 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 08:58:45.889216 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 16 08:58:45.896885 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784)
Jan 16 08:58:45.900794 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 08:58:45.900925 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:58:45.900941 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:58:45.901213 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 16 08:58:45.904162 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 08:58:45.904251 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 08:58:45.910943 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:58:45.924398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 08:58:45.927344 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 08:58:45.941066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 08:58:45.987453 coreos-metadata[787]: Jan 16 08:58:45.987 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 08:58:45.997896 coreos-metadata[786]: Jan 16 08:58:45.997 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 08:58:45.999455 coreos-metadata[787]: Jan 16 08:58:45.999 INFO Fetch successful Jan 16 08:58:46.005680 coreos-metadata[787]: Jan 16 08:58:46.005 INFO wrote hostname ci-4081.3.0-9-2d52908736 to /sysroot/etc/hostname Jan 16 08:58:46.006457 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 08:58:46.009352 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Jan 16 08:58:46.011578 coreos-metadata[786]: Jan 16 08:58:46.010 INFO Fetch successful Jan 16 08:58:46.014862 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Jan 16 08:58:46.018259 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 16 08:58:46.018395 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 16 08:58:46.021606 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jan 16 08:58:46.027138 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Jan 16 08:58:46.120111 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 08:58:46.125041 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 08:58:46.127351 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 08:58:46.140871 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 08:58:46.160729 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 08:58:46.169428 ignition[907]: INFO : Ignition 2.19.0 Jan 16 08:58:46.170142 ignition[907]: INFO : Stage: mount Jan 16 08:58:46.171823 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 08:58:46.171823 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 08:58:46.171823 ignition[907]: INFO : mount: mount passed Jan 16 08:58:46.171823 ignition[907]: INFO : Ignition finished successfully Jan 16 08:58:46.174677 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 08:58:46.182995 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 08:58:46.327489 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 08:58:46.340182 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 08:58:46.348874 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917) Jan 16 08:58:46.351249 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 08:58:46.351317 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 08:58:46.351338 kernel: BTRFS info (device vda6): using free space tree Jan 16 08:58:46.356257 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 08:58:46.356524 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 16 08:58:46.388246 ignition[935]: INFO : Ignition 2.19.0 Jan 16 08:58:46.388246 ignition[935]: INFO : Stage: files Jan 16 08:58:46.389651 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 08:58:46.389651 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 08:58:46.391742 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Jan 16 08:58:46.393000 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 08:58:46.393000 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 08:58:46.396027 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 08:58:46.396728 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 08:58:46.396728 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 08:58:46.396594 unknown[935]: wrote ssh authorized keys file for user: core Jan 16 08:58:46.399360 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 08:58:46.399360 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 16 08:58:46.434138 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 16 08:58:46.532573 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 08:58:46.533724 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 16 08:58:46.533724 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 08:58:46.533724 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 08:58:46.533724 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 08:58:46.533724 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 08:58:46.538190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 08:58:46.538190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 08:58:46.538190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 08:58:46.538190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 08:58:46.538190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 08:58:46.538190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 08:58:46.538190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 08:58:46.538190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 08:58:46.538190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 16 08:58:46.894030 systemd-networkd[746]: eth1: Gained IPv6LL Jan 16 08:58:47.060647 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 16 08:58:47.300662 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 08:58:47.300662 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 16 08:58:47.302113 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 08:58:47.302113 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 08:58:47.302113 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 16 08:58:47.302113 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 16 08:58:47.302113 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 08:58:47.302113 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 08:58:47.305827 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 08:58:47.305827 ignition[935]: INFO : files: files passed Jan 16 08:58:47.305827 ignition[935]: INFO : Ignition finished successfully Jan 16 08:58:47.303621 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 08:58:47.312384 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 08:58:47.314053 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 08:58:47.316928 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 08:58:47.317498 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 08:58:47.335298 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 08:58:47.336668 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 08:58:47.337918 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 08:58:47.340144 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 08:58:47.340744 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 08:58:47.345059 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 08:58:47.377889 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 08:58:47.378000 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 16 08:58:47.379432 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 08:58:47.380386 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 08:58:47.381316 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 08:58:47.388133 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 08:58:47.408612 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 08:58:47.414068 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 08:58:47.428516 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 08:58:47.429059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 08:58:47.430128 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 08:58:47.431120 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 08:58:47.431247 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 08:58:47.432248 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 08:58:47.432719 systemd[1]: Stopped target basic.target - Basic System. Jan 16 08:58:47.433615 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 08:58:47.434471 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 08:58:47.435211 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 08:58:47.436108 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 08:58:47.436976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 08:58:47.437911 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 08:58:47.439030 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 08:58:47.440107 systemd[1]: Stopped target swap.target - Swaps. Jan 16 08:58:47.440908 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 08:58:47.441038 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 08:58:47.442047 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 08:58:47.442603 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 08:58:47.443320 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 08:58:47.443427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 08:58:47.444132 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 08:58:47.444251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 08:58:47.445436 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 08:58:47.445562 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 08:58:47.446059 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 08:58:47.446152 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 08:58:47.446919 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 08:58:47.447012 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 08:58:47.458182 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 16 08:58:47.460535 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 08:58:47.460714 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 08:58:47.463061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 08:58:47.464313 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 08:58:47.464457 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 08:58:47.466238 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 08:58:47.466374 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 08:58:47.479832 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 08:58:47.479941 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 08:58:47.484879 ignition[988]: INFO : Ignition 2.19.0 Jan 16 08:58:47.484879 ignition[988]: INFO : Stage: umount Jan 16 08:58:47.484879 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 08:58:47.484879 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 08:58:47.487282 ignition[988]: INFO : umount: umount passed Jan 16 08:58:47.487282 ignition[988]: INFO : Ignition finished successfully Jan 16 08:58:47.487614 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 08:58:47.487730 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 08:58:47.489277 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 08:58:47.489325 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 08:58:47.492709 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 08:58:47.492776 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 08:58:47.494462 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 08:58:47.494519 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 08:58:47.495079 systemd[1]: Stopped target network.target - Network. Jan 16 08:58:47.496511 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 08:58:47.496571 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 08:58:47.497209 systemd[1]: Stopped target paths.target - Path Units. Jan 16 08:58:47.499361 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 08:58:47.504926 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 08:58:47.505370 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 08:58:47.506132 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 08:58:47.506954 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 08:58:47.507003 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 08:58:47.507338 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 08:58:47.507370 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 08:58:47.507717 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 08:58:47.507767 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 08:58:47.508144 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 08:58:47.508184 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 08:58:47.508652 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 16 08:58:47.509564 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 08:58:47.511316 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 08:58:47.511839 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 08:58:47.511949 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 08:58:47.513076 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 08:58:47.513165 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 08:58:47.514148 systemd-networkd[746]: eth0: DHCPv6 lease lost Jan 16 08:58:47.518993 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 08:58:47.519129 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 08:58:47.519938 systemd-networkd[746]: eth1: DHCPv6 lease lost Jan 16 08:58:47.521147 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 08:58:47.521260 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 08:58:47.524045 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 08:58:47.524573 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 08:58:47.525743 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 08:58:47.525807 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 08:58:47.529991 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 08:58:47.530429 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 08:58:47.530518 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 08:58:47.531032 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 08:58:47.531079 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:58:47.531455 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 08:58:47.531495 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 08:58:47.532042 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 08:58:47.540691 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 08:58:47.540927 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 08:58:47.544516 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 08:58:47.544662 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 08:58:47.545309 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 08:58:47.545371 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 08:58:47.546080 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 08:58:47.546161 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 08:58:47.547505 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 08:58:47.547570 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 08:58:47.548453 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 08:58:47.548506 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 08:58:47.554225 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 08:58:47.554883 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 16 08:58:47.554992 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 08:58:47.558609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 08:58:47.558694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:58:47.568660 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 08:58:47.568781 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 08:58:47.574983 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 08:58:47.575091 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 08:58:47.576248 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 08:58:47.582134 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 08:58:47.591607 systemd[1]: Switching root. Jan 16 08:58:47.614678 systemd-journald[184]: Journal stopped Jan 16 08:58:48.677550 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 16 08:58:48.677622 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 08:58:48.677642 kernel: SELinux: policy capability open_perms=1 Jan 16 08:58:48.677658 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 08:58:48.677669 kernel: SELinux: policy capability always_check_network=0 Jan 16 08:58:48.677688 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 08:58:48.677700 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 08:58:48.677711 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 08:58:48.677723 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 08:58:48.677734 kernel: audit: type=1403 audit(1737017927.783:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 16 08:58:48.677751 systemd[1]: Successfully loaded SELinux policy in 37.550ms. Jan 16 08:58:48.677777 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.020ms. Jan 16 08:58:48.677798 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 08:58:48.677818 systemd[1]: Detected virtualization kvm. Jan 16 08:58:48.677837 systemd[1]: Detected architecture x86-64. Jan 16 08:58:48.678950 systemd[1]: Detected first boot. Jan 16 08:58:48.678985 systemd[1]: Hostname set to <ci-4081.3.0-9-2d52908736>. Jan 16 08:58:48.679003 systemd[1]: Initializing machine ID from VM UUID. Jan 16 08:58:48.679017 zram_generator::config[1031]: No configuration found. Jan 16 08:58:48.679048 systemd[1]: Populated /etc with preset unit settings. Jan 16 08:58:48.679065 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 08:58:48.679082 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 16 08:58:48.679100 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 16 08:58:48.679120 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 08:58:48.679141 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 08:58:48.679157 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 08:58:48.679170 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 08:58:48.679186 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 08:58:48.679203 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 08:58:48.679223 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 08:58:48.679241 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 08:58:48.679262 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 08:58:48.679283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 08:58:48.679302 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 08:58:48.679321 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 08:58:48.679339 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 08:58:48.679361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 08:58:48.679379 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 16 08:58:48.679398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 08:58:48.679416 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 08:58:48.679440 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 16 08:58:48.679459 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 08:58:48.679479 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 08:58:48.679506 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 08:58:48.679525 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 08:58:48.679541 systemd[1]: Reached target slices.target - Slice Units. Jan 16 08:58:48.679558 systemd[1]: Reached target swap.target - Swaps. Jan 16 08:58:48.679575 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 08:58:48.679594 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 08:58:48.679610 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 08:58:48.679626 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 08:58:48.679657 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 08:58:48.679678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 16 08:58:48.679697 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 08:58:48.679721 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 08:58:48.679739 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 08:58:48.679755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:48.679772 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 08:58:48.679789 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 08:58:48.679807 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 16 08:58:48.679837 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 08:58:48.680995 systemd[1]: Reached target machines.target - Containers. Jan 16 08:58:48.681043 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 08:58:48.681069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:58:48.681092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 08:58:48.681119 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 08:58:48.681158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 08:58:48.681184 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 08:58:48.681216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 08:58:48.681255 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 08:58:48.681278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 08:58:48.681298 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 08:58:48.681335 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 08:58:48.681367 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 08:58:48.681396 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 08:58:48.681413 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 08:58:48.681436 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 08:58:48.681466 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 08:58:48.681488 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 08:58:48.681509 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 08:58:48.681528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 08:58:48.681551 systemd[1]: verity-setup.service: Deactivated successfully. Jan 16 08:58:48.681570 systemd[1]: Stopped verity-setup.service. Jan 16 08:58:48.681597 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:48.681623 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 08:58:48.681642 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 08:58:48.681665 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 08:58:48.681682 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 08:58:48.681702 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 08:58:48.681718 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 08:58:48.681736 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 08:58:48.681757 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 08:58:48.681775 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jan 16 08:58:48.681795 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 08:58:48.681819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 08:58:48.681840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 08:58:48.681887 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 08:58:48.681910 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 08:58:48.681924 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 08:58:48.681936 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 08:58:48.681950 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 08:58:48.682018 systemd-journald[1100]: Collecting audit messages is disabled. Jan 16 08:58:48.682063 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 08:58:48.682079 kernel: ACPI: bus type drm_connector registered Jan 16 08:58:48.682093 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 08:58:48.682105 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 08:58:48.682118 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 08:58:48.682131 systemd-journald[1100]: Journal started Jan 16 08:58:48.682157 systemd-journald[1100]: Runtime Journal (/run/log/journal/da253e18ee1b4e768f5d0551817b82d7) is 4.9M, max 39.3M, 34.4M free. Jan 16 08:58:48.358003 systemd[1]: Queued start job for default target multi-user.target. Jan 16 08:58:48.381559 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 16 08:58:48.382241 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 08:58:48.685931 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 16 08:58:48.695882 kernel: loop: module loaded Jan 16 08:58:48.701196 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 08:58:48.707875 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 08:58:48.711059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:58:48.711139 kernel: fuse: init (API version 7.39) Jan 16 08:58:48.720897 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 08:58:48.725505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 08:58:48.728897 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 08:58:48.737883 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 08:58:48.748886 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 08:58:48.752014 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 08:58:48.752776 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 08:58:48.753312 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 08:58:48.754098 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 08:58:48.754252 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 16 08:58:48.760674 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 08:58:48.762280 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 08:58:48.763955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 08:58:48.764804 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 08:58:48.772828 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 08:58:48.816356 kernel: loop0: detected capacity change from 0 to 211296 Jan 16 08:58:48.817466 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 08:58:48.832049 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 08:58:48.835705 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 08:58:48.843070 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 16 08:58:48.843563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 08:58:48.847672 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 08:58:48.851479 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 08:58:48.867694 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 08:58:48.869607 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 16 08:58:48.878092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:58:48.893649 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 08:58:48.902923 systemd-journald[1100]: Time spent on flushing to /var/log/journal/da253e18ee1b4e768f5d0551817b82d7 is 57.551ms for 994 entries. Jan 16 08:58:48.902923 systemd-journald[1100]: System Journal (/var/log/journal/da253e18ee1b4e768f5d0551817b82d7) is 8.0M, max 195.6M, 187.6M free. Jan 16 08:58:48.989805 systemd-journald[1100]: Received client request to flush runtime journal. Jan 16 08:58:48.990047 kernel: loop1: detected capacity change from 0 to 142488 Jan 16 08:58:48.990081 kernel: loop2: detected capacity change from 0 to 8 Jan 16 08:58:48.972711 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 08:58:48.983886 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 16 08:58:48.993524 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 08:58:49.003407 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 08:58:49.012160 kernel: loop3: detected capacity change from 0 to 140768 Jan 16 08:58:49.020217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 08:58:49.061992 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 16 08:58:49.069879 kernel: loop4: detected capacity change from 0 to 211296 Jan 16 08:58:49.103989 kernel: loop5: detected capacity change from 0 to 142488 Jan 16 08:58:49.116728 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Jan 16 08:58:49.117320 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. 
Jan 16 08:58:49.135225 kernel: loop6: detected capacity change from 0 to 8 Jan 16 08:58:49.133941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 08:58:49.138979 kernel: loop7: detected capacity change from 0 to 140768 Jan 16 08:58:49.170198 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 16 08:58:49.172733 (sd-merge)[1175]: Merged extensions into '/usr'. Jan 16 08:58:49.181377 systemd[1]: Reloading requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 08:58:49.181398 systemd[1]: Reloading... Jan 16 08:58:49.365889 zram_generator::config[1204]: No configuration found. Jan 16 08:58:49.468069 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 08:58:49.588069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:58:49.637658 systemd[1]: Reloading finished in 455 ms. Jan 16 08:58:49.663071 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 08:58:49.666932 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 08:58:49.676895 systemd[1]: Starting ensure-sysext.service... Jan 16 08:58:49.683073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 08:58:49.690490 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Jan 16 08:58:49.690648 systemd[1]: Reloading... Jan 16 08:58:49.756616 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 08:58:49.757225 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 16 08:58:49.760819 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 16 08:58:49.761301 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jan 16 08:58:49.761401 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jan 16 08:58:49.768462 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 08:58:49.768480 systemd-tmpfiles[1248]: Skipping /boot Jan 16 08:58:49.790881 zram_generator::config[1274]: No configuration found. Jan 16 08:58:49.797481 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 08:58:49.797499 systemd-tmpfiles[1248]: Skipping /boot Jan 16 08:58:49.927497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:58:49.978832 systemd[1]: Reloading finished in 287 ms. Jan 16 08:58:49.993941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 08:58:50.015121 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 08:58:50.019084 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 08:58:50.025776 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 16 08:58:50.037167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 08:58:50.045067 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 08:58:50.059270 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 08:58:50.061250 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 08:58:50.072235 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 08:58:50.075727 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:50.076200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:58:50.087203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 08:58:50.091252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 08:58:50.097120 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 08:58:50.097671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:58:50.097799 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:50.101036 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:50.101269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:58:50.101443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:58:50.101524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:50.112492 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:50.112750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:58:50.123217 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 08:58:50.123766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:58:50.124967 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:50.128954 systemd[1]: Finished ensure-sysext.service. Jan 16 08:58:50.131326 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 08:58:50.133418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 08:58:50.134303 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 08:58:50.142203 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 08:58:50.150103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 08:58:50.154111 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 16 08:58:50.163818 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 08:58:50.164933 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 08:58:50.165501 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 08:58:50.167153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 08:58:50.167285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 08:58:50.174565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 08:58:50.176562 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 08:58:50.176723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 08:58:50.184014 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 08:58:50.190279 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 08:58:50.201141 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 08:58:50.208505 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Jan 16 08:58:50.224667 augenrules[1360]: No rules Jan 16 08:58:50.228260 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 08:58:50.229424 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 08:58:50.268955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 08:58:50.281037 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 08:58:50.320989 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 08:58:50.321587 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 08:58:50.323153 systemd-resolved[1322]: Positive Trust Anchors: Jan 16 08:58:50.323166 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 08:58:50.323201 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 08:58:50.338126 systemd-resolved[1322]: Using system hostname 'ci-4081.3.0-9-2d52908736'. Jan 16 08:58:50.341594 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 08:58:50.342086 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 08:58:50.381283 systemd-networkd[1373]: lo: Link UP Jan 16 08:58:50.381292 systemd-networkd[1373]: lo: Gained carrier Jan 16 08:58:50.383494 systemd-networkd[1373]: Enumeration completed Jan 16 08:58:50.383611 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 08:58:50.384706 systemd[1]: Reached target network.target - Network. 
Jan 16 08:58:50.392597 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 08:58:50.393184 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 16 08:58:50.405983 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1370) Jan 16 08:58:50.406680 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 16 08:58:50.407530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:50.407685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:58:50.416099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 08:58:50.420031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 08:58:50.423008 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 08:58:50.423455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:58:50.423497 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 08:58:50.423514 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:58:50.462914 kernel: ISO 9660 Extensions: RRIP_1991A Jan 16 08:58:50.455339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 08:58:50.455526 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 08:58:50.461404 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 16 08:58:50.488544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 08:58:50.489629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 08:58:50.490540 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 08:58:50.491082 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 08:58:50.492674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 08:58:50.494090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 08:58:50.497925 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 16 08:58:50.503877 kernel: ACPI: button: Power Button [PWRF] Jan 16 08:58:50.542871 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 16 08:58:50.564743 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 16 08:58:50.557579 systemd-networkd[1373]: eth0: Configuring with /run/systemd/network/10-62:f8:ad:c2:49:ee.network. Jan 16 08:58:50.558913 systemd-networkd[1373]: eth0: Link UP Jan 16 08:58:50.558918 systemd-networkd[1373]: eth0: Gained carrier Jan 16 08:58:50.564093 systemd-networkd[1373]: eth1: Configuring with /run/systemd/network/10-12:44:a5:8b:5d:d2.network. 
Jan 16 08:58:50.567055 systemd-networkd[1373]: eth1: Link UP Jan 16 08:58:50.567064 systemd-networkd[1373]: eth1: Gained carrier Jan 16 08:58:50.573529 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 16 08:58:50.574152 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 16 08:58:50.602018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 08:58:50.609963 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 08:58:50.611062 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 08:58:50.635805 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 16 08:58:50.635911 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 16 08:58:50.648345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 08:58:50.653638 kernel: Console: switching to colour dummy device 80x25 Jan 16 08:58:50.653748 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 08:58:50.653764 kernel: [drm] features: -context_init Jan 16 08:58:50.656865 kernel: [drm] number of scanouts: 1 Jan 16 08:58:50.656925 kernel: [drm] number of cap sets: 0 Jan 16 08:58:50.659893 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 16 08:58:50.661902 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 16 08:58:50.662299 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 08:58:50.663769 kernel: Console: switching to colour frame buffer device 128x48 Jan 16 08:58:50.671004 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 08:58:50.725926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 08:58:50.727561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:58:50.740172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 08:58:50.775077 kernel: EDAC MC: Ver: 3.0.0 Jan 16 08:58:50.798230 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 16 08:58:50.806676 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 16 08:58:50.823525 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 08:58:50.826011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:58:50.851239 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 16 08:58:50.852528 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 08:58:50.852634 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 08:58:50.852800 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 08:58:50.852933 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 08:58:50.853225 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 08:58:50.853379 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 08:58:50.853446 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 16 08:58:50.853516 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 08:58:50.853542 systemd[1]: Reached target paths.target - Path Units. Jan 16 08:58:50.853592 systemd[1]: Reached target timers.target - Timer Units. Jan 16 08:58:50.855106 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 08:58:50.858407 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 08:58:50.864346 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 08:58:50.866362 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 08:58:50.870572 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 08:58:50.873032 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 08:58:50.874013 systemd[1]: Reached target basic.target - Basic System. Jan 16 08:58:50.875204 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 08:58:50.875235 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 08:58:50.879996 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 08:58:50.884115 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 08:58:50.892107 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 08:58:50.896095 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 08:58:50.904057 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 08:58:50.913080 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 08:58:50.913621 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 08:58:50.916169 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 08:58:50.926374 jq[1434]: false Jan 16 08:58:50.928709 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 08:58:50.931750 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 08:58:50.943301 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 08:58:50.957178 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 08:58:50.959001 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 08:58:50.963017 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 08:58:50.968392 coreos-metadata[1432]: Jan 16 08:58:50.964 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 08:58:50.965059 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 08:58:50.969530 dbus-daemon[1433]: [system] SELinux support is enabled Jan 16 08:58:50.974096 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 08:58:50.976172 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 16 08:58:50.977537 coreos-metadata[1432]: Jan 16 08:58:50.977 INFO Fetch successful Jan 16 08:58:50.984058 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 08:58:50.992440 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 08:58:50.992620 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 08:58:51.004165 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 08:58:51.004810 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 08:58:51.016134 jq[1447]: true Jan 16 08:58:51.014796 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 08:58:51.015927 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 08:58:51.032902 extend-filesystems[1437]: Found loop4 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found loop5 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found loop6 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found loop7 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found vda Jan 16 08:58:51.032902 extend-filesystems[1437]: Found vda1 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found vda2 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found vda3 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found usr Jan 16 08:58:51.032902 extend-filesystems[1437]: Found vda4 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found vda6 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found vda7 Jan 16 08:58:51.032902 extend-filesystems[1437]: Found vda9 Jan 16 08:58:51.032902 extend-filesystems[1437]: Checking size of /dev/vda9 Jan 16 08:58:51.112584 update_engine[1446]: I20250116 08:58:51.084797 1446 main.cc:92] Flatcar Update Engine starting Jan 16 08:58:51.032782 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 08:58:51.116042 extend-filesystems[1437]: Resized partition /dev/vda9 Jan 16 08:58:51.036920 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 08:58:51.117793 update_engine[1446]: I20250116 08:58:51.116534 1446 update_check_scheduler.cc:74] Next update check in 7m54s Jan 16 08:58:51.117825 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024) Jan 16 08:58:51.051040 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 08:58:51.051130 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 16 08:58:51.120742 tar[1455]: linux-amd64/helm Jan 16 08:58:51.051151 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 08:58:51.073182 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 08:58:51.126352 jq[1461]: true Jan 16 08:58:51.108875 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 08:58:51.121176 systemd[1]: Started update-engine.service - Update Engine. 
Jan 16 08:58:51.125085 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 08:58:51.134715 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 16 08:58:51.132130 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 08:58:51.135278 systemd-logind[1445]: New seat seat0. Jan 16 08:58:51.141808 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Jan 16 08:58:51.141834 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 16 08:58:51.150448 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 08:58:51.204955 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1372) Jan 16 08:58:51.275888 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 16 08:58:51.325015 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 16 08:58:51.325015 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 16 08:58:51.325015 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 16 08:58:51.338015 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Jan 16 08:58:51.338015 extend-filesystems[1437]: Found vdb Jan 16 08:58:51.333670 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 08:58:51.334463 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 08:58:51.368662 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Jan 16 08:58:51.369783 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 08:58:51.390243 systemd[1]: Starting sshkeys.service... Jan 16 08:58:51.418500 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 08:58:51.433772 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 08:58:51.453979 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 08:58:51.518565 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 08:58:51.562283 coreos-metadata[1509]: Jan 16 08:58:51.562 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 08:58:51.583209 coreos-metadata[1509]: Jan 16 08:58:51.582 INFO Fetch successful Jan 16 08:58:51.588677 unknown[1509]: wrote ssh authorized keys file for user: core Jan 16 08:58:51.601307 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 08:58:51.614377 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 08:58:51.630925 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys" Jan 16 08:58:51.632993 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 08:58:51.640233 systemd[1]: Finished sshkeys.service. Jan 16 08:58:51.644677 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 08:58:51.644901 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 08:58:51.652301 containerd[1468]: time="2025-01-16T08:58:51.650707395Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 16 08:58:51.654370 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 16 08:58:51.687433 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 08:58:51.697423 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 08:58:51.707804 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 08:58:51.713340 containerd[1468]: time="2025-01-16T08:58:51.712697554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:51.712924 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 08:58:51.716198 containerd[1468]: time="2025-01-16T08:58:51.716107776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:51.716675 containerd[1468]: time="2025-01-16T08:58:51.716305781Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 08:58:51.716675 containerd[1468]: time="2025-01-16T08:58:51.716331956Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 08:58:51.716675 containerd[1468]: time="2025-01-16T08:58:51.716530315Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 08:58:51.716675 containerd[1468]: time="2025-01-16T08:58:51.716547508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:51.716675 containerd[1468]: time="2025-01-16T08:58:51.716603701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:51.716675 containerd[1468]: time="2025-01-16T08:58:51.716615738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:51.717570 containerd[1468]: time="2025-01-16T08:58:51.717290168Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:51.717570 containerd[1468]: time="2025-01-16T08:58:51.717316493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:51.717570 containerd[1468]: time="2025-01-16T08:58:51.717332742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:51.717570 containerd[1468]: time="2025-01-16T08:58:51.717343860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:51.717570 containerd[1468]: time="2025-01-16T08:58:51.717444954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:51.717745 containerd[1468]: time="2025-01-16T08:58:51.717655611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 16 08:58:51.718743 containerd[1468]: time="2025-01-16T08:58:51.718195090Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:51.718743 containerd[1468]: time="2025-01-16T08:58:51.718225557Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 08:58:51.718743 containerd[1468]: time="2025-01-16T08:58:51.718331193Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 08:58:51.718743 containerd[1468]: time="2025-01-16T08:58:51.718382039Z" level=info msg="metadata content store policy set" policy=shared Jan 16 08:58:51.721304 containerd[1468]: time="2025-01-16T08:58:51.721250348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 08:58:51.721739 containerd[1468]: time="2025-01-16T08:58:51.721478196Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 08:58:51.721739 containerd[1468]: time="2025-01-16T08:58:51.721501305Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 08:58:51.721739 containerd[1468]: time="2025-01-16T08:58:51.721517230Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 08:58:51.721739 containerd[1468]: time="2025-01-16T08:58:51.721532286Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 16 08:58:51.721739 containerd[1468]: time="2025-01-16T08:58:51.721679200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722416126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722557811Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722576027Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722588992Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722614070Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722628163Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722640515Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722654259Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722668078Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722681102Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722693783Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722705289Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722723941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723045 containerd[1468]: time="2025-01-16T08:58:51.722736397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722750273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722763367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722776554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722789061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722800498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722814004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722832079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722862688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722876361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722887122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722902019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722924663Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722952830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722968349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.723361 containerd[1468]: time="2025-01-16T08:58:51.722984083Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 08:58:51.724815 containerd[1468]: time="2025-01-16T08:58:51.724161172Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 08:58:51.724815 containerd[1468]: time="2025-01-16T08:58:51.724194932Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 08:58:51.724815 containerd[1468]: time="2025-01-16T08:58:51.724273408Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 08:58:51.724815 containerd[1468]: time="2025-01-16T08:58:51.724285754Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 08:58:51.724815 containerd[1468]: time="2025-01-16T08:58:51.724295107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 08:58:51.724815 containerd[1468]: time="2025-01-16T08:58:51.724307846Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 08:58:51.724815 containerd[1468]: time="2025-01-16T08:58:51.724317244Z" level=info msg="NRI interface is disabled by configuration." Jan 16 08:58:51.724815 containerd[1468]: time="2025-01-16T08:58:51.724327670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 16 08:58:51.725059 containerd[1468]: time="2025-01-16T08:58:51.724597511Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 08:58:51.725059 containerd[1468]: time="2025-01-16T08:58:51.724658229Z" level=info msg="Connect containerd service" Jan 16 08:58:51.725059 containerd[1468]: time="2025-01-16T08:58:51.724703282Z" level=info msg="using legacy CRI server" Jan 16 08:58:51.725059 containerd[1468]: time="2025-01-16T08:58:51.724714979Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 08:58:51.726869 containerd[1468]: time="2025-01-16T08:58:51.725560542Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 08:58:51.728493 containerd[1468]: time="2025-01-16T08:58:51.728464617Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 08:58:51.728946 
containerd[1468]: time="2025-01-16T08:58:51.728928165Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 08:58:51.729053 containerd[1468]: time="2025-01-16T08:58:51.729040944Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 08:58:51.729161 containerd[1468]: time="2025-01-16T08:58:51.729133963Z" level=info msg="Start subscribing containerd event" Jan 16 08:58:51.729228 containerd[1468]: time="2025-01-16T08:58:51.729219049Z" level=info msg="Start recovering state" Jan 16 08:58:51.729352 containerd[1468]: time="2025-01-16T08:58:51.729339454Z" level=info msg="Start event monitor" Jan 16 08:58:51.729418 containerd[1468]: time="2025-01-16T08:58:51.729404520Z" level=info msg="Start snapshots syncer" Jan 16 08:58:51.729459 containerd[1468]: time="2025-01-16T08:58:51.729451180Z" level=info msg="Start cni network conf syncer for default" Jan 16 08:58:51.729729 containerd[1468]: time="2025-01-16T08:58:51.729702671Z" level=info msg="Start streaming server" Jan 16 08:58:51.729905 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 08:58:51.732782 containerd[1468]: time="2025-01-16T08:58:51.732728228Z" level=info msg="containerd successfully booted in 0.084544s" Jan 16 08:58:51.906376 tar[1455]: linux-amd64/LICENSE Jan 16 08:58:51.906621 tar[1455]: linux-amd64/README.md Jan 16 08:58:51.932657 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 08:58:52.078051 systemd-networkd[1373]: eth1: Gained IPv6LL Jan 16 08:58:52.078881 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 16 08:58:52.082390 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 08:58:52.084201 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 08:58:52.099227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:52.101983 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 08:58:52.132115 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 08:58:52.270337 systemd-networkd[1373]: eth0: Gained IPv6LL Jan 16 08:58:52.271487 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 16 08:58:53.037607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:53.040641 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 08:58:53.042947 systemd[1]: Startup finished in 1.012s (kernel) + 5.090s (initrd) + 5.295s (userspace) = 11.398s. Jan 16 08:58:53.048229 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:58:53.805076 kubelet[1556]: E0116 08:58:53.804916 1556 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:58:53.809276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:58:53.809449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:58:53.810103 systemd[1]: kubelet.service: Consumed 1.222s CPU time. Jan 16 08:58:53.824169 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 16 08:58:53.826185 systemd[1]: Started sshd@0-147.182.202.230:22-139.178.68.195:34980.service - OpenSSH per-connection server daemon (139.178.68.195:34980). Jan 16 08:58:53.902624 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 34980 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:53.905229 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:53.916551 systemd-logind[1445]: New session 1 of user core. Jan 16 08:58:53.917426 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 08:58:53.926293 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 08:58:53.941649 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 08:58:53.949189 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 08:58:53.954153 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 08:58:54.066226 systemd[1573]: Queued start job for default target default.target. Jan 16 08:58:54.073204 systemd[1573]: Created slice app.slice - User Application Slice. Jan 16 08:58:54.073246 systemd[1573]: Reached target paths.target - Paths. Jan 16 08:58:54.073270 systemd[1573]: Reached target timers.target - Timers. Jan 16 08:58:54.075008 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 08:58:54.088867 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 08:58:54.088992 systemd[1573]: Reached target sockets.target - Sockets. Jan 16 08:58:54.089009 systemd[1573]: Reached target basic.target - Basic System. Jan 16 08:58:54.089054 systemd[1573]: Reached target default.target - Main User Target. Jan 16 08:58:54.089087 systemd[1573]: Startup finished in 126ms. Jan 16 08:58:54.089423 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 08:58:54.097244 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 08:58:54.170971 systemd[1]: Started sshd@1-147.182.202.230:22-139.178.68.195:34992.service - OpenSSH per-connection server daemon (139.178.68.195:34992). Jan 16 08:58:54.217181 sshd[1584]: Accepted publickey for core from 139.178.68.195 port 34992 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:54.219417 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:54.227331 systemd-logind[1445]: New session 2 of user core. Jan 16 08:58:54.233168 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 08:58:54.295443 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:54.307596 systemd[1]: sshd@1-147.182.202.230:22-139.178.68.195:34992.service: Deactivated successfully. Jan 16 08:58:54.309522 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 08:58:54.310398 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Jan 16 08:58:54.316188 systemd[1]: Started sshd@2-147.182.202.230:22-139.178.68.195:41908.service - OpenSSH per-connection server daemon (139.178.68.195:41908). Jan 16 08:58:54.316779 systemd-logind[1445]: Removed session 2. 
Jan 16 08:58:54.359380 sshd[1591]: Accepted publickey for core from 139.178.68.195 port 41908 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:54.360943 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:54.367052 systemd-logind[1445]: New session 3 of user core. Jan 16 08:58:54.377163 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 08:58:54.434792 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:54.451961 systemd[1]: sshd@2-147.182.202.230:22-139.178.68.195:41908.service: Deactivated successfully. Jan 16 08:58:54.454156 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 08:58:54.454963 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Jan 16 08:58:54.460266 systemd[1]: Started sshd@3-147.182.202.230:22-139.178.68.195:41914.service - OpenSSH per-connection server daemon (139.178.68.195:41914). Jan 16 08:58:54.462542 systemd-logind[1445]: Removed session 3. Jan 16 08:58:54.507592 sshd[1598]: Accepted publickey for core from 139.178.68.195 port 41914 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:54.509234 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:54.513723 systemd-logind[1445]: New session 4 of user core. Jan 16 08:58:54.524127 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 08:58:54.584559 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:54.617752 systemd[1]: sshd@3-147.182.202.230:22-139.178.68.195:41914.service: Deactivated successfully. Jan 16 08:58:54.620542 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 08:58:54.622522 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Jan 16 08:58:54.627368 systemd[1]: Started sshd@4-147.182.202.230:22-139.178.68.195:41922.service - OpenSSH per-connection server daemon (139.178.68.195:41922). Jan 16 08:58:54.629207 systemd-logind[1445]: Removed session 4. Jan 16 08:58:54.679951 sshd[1605]: Accepted publickey for core from 139.178.68.195 port 41922 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:54.681465 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:54.686282 systemd-logind[1445]: New session 5 of user core. Jan 16 08:58:54.693139 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 08:58:54.764592 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 08:58:54.764946 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:54.778008 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:54.782278 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:54.790801 systemd[1]: sshd@4-147.182.202.230:22-139.178.68.195:41922.service: Deactivated successfully. Jan 16 08:58:54.793348 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 08:58:54.796130 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Jan 16 08:58:54.801302 systemd[1]: Started sshd@5-147.182.202.230:22-139.178.68.195:41936.service - OpenSSH per-connection server daemon (139.178.68.195:41936). Jan 16 08:58:54.803104 systemd-logind[1445]: Removed session 5. 
Jan 16 08:58:54.844739 sshd[1613]: Accepted publickey for core from 139.178.68.195 port 41936 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:54.845721 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:54.852136 systemd-logind[1445]: New session 6 of user core. Jan 16 08:58:54.862168 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 08:58:54.922663 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 08:58:54.923352 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:54.927500 sudo[1617]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:54.935310 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 16 08:58:54.935703 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:54.954772 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 16 08:58:54.957782 auditctl[1620]: No rules Jan 16 08:58:54.958490 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 08:58:54.958768 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 16 08:58:54.967470 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 08:58:55.010243 augenrules[1638]: No rules Jan 16 08:58:55.012341 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 08:58:55.014807 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:55.019508 sshd[1613]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:55.037843 systemd[1]: sshd@5-147.182.202.230:22-139.178.68.195:41936.service: Deactivated successfully. Jan 16 08:58:55.040635 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 08:58:55.044288 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Jan 16 08:58:55.048249 systemd[1]: Started sshd@6-147.182.202.230:22-139.178.68.195:41948.service - OpenSSH per-connection server daemon (139.178.68.195:41948). Jan 16 08:58:55.051061 systemd-logind[1445]: Removed session 6. Jan 16 08:58:55.094157 sshd[1646]: Accepted publickey for core from 139.178.68.195 port 41948 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:55.096298 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:55.101448 systemd-logind[1445]: New session 7 of user core. Jan 16 08:58:55.110352 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 08:58:55.171521 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 08:58:55.171969 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:55.644206 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 16 08:58:55.653437 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 08:58:56.081041 dockerd[1665]: time="2025-01-16T08:58:56.079805844Z" level=info msg="Starting up" Jan 16 08:58:56.200124 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1937981885-merged.mount: Deactivated successfully. 
Jan 16 08:58:56.281204 dockerd[1665]: time="2025-01-16T08:58:56.281142419Z" level=info msg="Loading containers: start." Jan 16 08:58:56.404092 kernel: Initializing XFRM netlink socket Jan 16 08:58:56.432685 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 16 08:58:56.434450 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 16 08:58:56.445020 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 16 08:58:56.495746 systemd-networkd[1373]: docker0: Link UP Jan 16 08:58:56.496088 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 16 08:58:56.517038 dockerd[1665]: time="2025-01-16T08:58:56.516936782Z" level=info msg="Loading containers: done." Jan 16 08:58:56.535262 dockerd[1665]: time="2025-01-16T08:58:56.535201702Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 08:58:56.535443 dockerd[1665]: time="2025-01-16T08:58:56.535394684Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 16 08:58:56.535548 dockerd[1665]: time="2025-01-16T08:58:56.535525952Z" level=info msg="Daemon has completed initialization" Jan 16 08:58:56.565421 dockerd[1665]: time="2025-01-16T08:58:56.565301305Z" level=info msg="API listen on /run/docker.sock" Jan 16 08:58:56.566113 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 08:58:57.521926 containerd[1468]: time="2025-01-16T08:58:57.521796859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 16 08:58:58.076548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926732973.mount: Deactivated successfully. 
Jan 16 08:58:59.504305 containerd[1468]: time="2025-01-16T08:58:59.504216851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:59.506468 containerd[1468]: time="2025-01-16T08:58:59.506360759Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 16 08:58:59.507432 containerd[1468]: time="2025-01-16T08:58:59.507113847Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:59.510660 containerd[1468]: time="2025-01-16T08:58:59.510587060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:59.512403 containerd[1468]: time="2025-01-16T08:58:59.512104787Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 1.990260715s" Jan 16 08:58:59.512403 containerd[1468]: time="2025-01-16T08:58:59.512165506Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 16 08:58:59.543779 containerd[1468]: time="2025-01-16T08:58:59.543724051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 16 08:59:02.060710 containerd[1468]: time="2025-01-16T08:59:02.059189330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:02.060710 containerd[1468]: time="2025-01-16T08:59:02.060401538Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 16 08:59:02.060710 containerd[1468]: time="2025-01-16T08:59:02.060633076Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:02.065280 containerd[1468]: time="2025-01-16T08:59:02.065220951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:02.067191 containerd[1468]: time="2025-01-16T08:59:02.067128867Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 2.523065815s" Jan 16 08:59:02.067191 containerd[1468]: time="2025-01-16T08:59:02.067184650Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 16 
08:59:02.107666 containerd[1468]: time="2025-01-16T08:59:02.107594323Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 16 08:59:03.333009 containerd[1468]: time="2025-01-16T08:59:03.332674142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:03.334533 containerd[1468]: time="2025-01-16T08:59:03.334137338Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 16 08:59:03.335091 containerd[1468]: time="2025-01-16T08:59:03.335039539Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:03.338677 containerd[1468]: time="2025-01-16T08:59:03.338614588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:03.340498 containerd[1468]: time="2025-01-16T08:59:03.340436983Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.232510794s" Jan 16 08:59:03.340498 containerd[1468]: time="2025-01-16T08:59:03.340494783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 16 08:59:03.384352 containerd[1468]: time="2025-01-16T08:59:03.384297303Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 16 08:59:03.775950 systemd-resolved[1322]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 16 08:59:04.059812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 08:59:04.069205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:59:04.229089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:59:04.234558 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:59:04.341629 kubelet[1904]: E0116 08:59:04.340771 1904 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:59:04.347114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:59:04.347258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:59:04.665006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267853146.mount: Deactivated successfully. 
Jan 16 08:59:05.176203 containerd[1468]: time="2025-01-16T08:59:05.176141741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:05.176931 containerd[1468]: time="2025-01-16T08:59:05.176885863Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 16 08:59:05.178825 containerd[1468]: time="2025-01-16T08:59:05.178770910Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:05.180530 containerd[1468]: time="2025-01-16T08:59:05.179381914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:05.180530 containerd[1468]: time="2025-01-16T08:59:05.180149613Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.795799955s" Jan 16 08:59:05.180530 containerd[1468]: time="2025-01-16T08:59:05.180180798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 16 08:59:05.208609 containerd[1468]: time="2025-01-16T08:59:05.208574633Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 16 08:59:05.688316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1915156794.mount: Deactivated successfully. 
Jan 16 08:59:06.445774 containerd[1468]: time="2025-01-16T08:59:06.445718947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:06.448339 containerd[1468]: time="2025-01-16T08:59:06.448274197Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 16 08:59:06.449761 containerd[1468]: time="2025-01-16T08:59:06.449704356Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:06.453422 containerd[1468]: time="2025-01-16T08:59:06.453363465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:06.455172 containerd[1468]: time="2025-01-16T08:59:06.454963469Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.246210707s" Jan 16 08:59:06.455172 containerd[1468]: time="2025-01-16T08:59:06.455014898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 16 08:59:06.488350 containerd[1468]: time="2025-01-16T08:59:06.488302240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 16 08:59:06.862117 systemd-resolved[1322]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 16 08:59:06.965505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035219108.mount: Deactivated successfully. 
Jan 16 08:59:06.970536 containerd[1468]: time="2025-01-16T08:59:06.969285490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:06.970536 containerd[1468]: time="2025-01-16T08:59:06.970337228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 16 08:59:06.970536 containerd[1468]: time="2025-01-16T08:59:06.970476002Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:06.972916 containerd[1468]: time="2025-01-16T08:59:06.972880770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:06.974027 containerd[1468]: time="2025-01-16T08:59:06.973987191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 485.380523ms" Jan 16 08:59:06.974187 containerd[1468]: time="2025-01-16T08:59:06.974166503Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 16 08:59:07.011344 containerd[1468]: time="2025-01-16T08:59:07.011273868Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 16 08:59:07.511554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2639770237.mount: Deactivated successfully. Jan 16 08:59:09.214715 containerd[1468]: time="2025-01-16T08:59:09.213502616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:09.214715 containerd[1468]: time="2025-01-16T08:59:09.214602026Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 16 08:59:09.215420 containerd[1468]: time="2025-01-16T08:59:09.215179142Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:09.219233 containerd[1468]: time="2025-01-16T08:59:09.219186712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:09.220332 containerd[1468]: time="2025-01-16T08:59:09.220297544Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.208957278s" Jan 16 08:59:09.220464 containerd[1468]: time="2025-01-16T08:59:09.220448517Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 16 08:59:12.387781 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 08:59:12.397166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:59:12.428142 systemd[1]: Reloading requested from client PID 2085 ('systemctl') (unit session-7.scope)... Jan 16 08:59:12.428161 systemd[1]: Reloading... Jan 16 08:59:12.568936 zram_generator::config[2124]: No configuration found. Jan 16 08:59:12.690641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:59:12.767278 systemd[1]: Reloading finished in 338 ms. Jan 16 08:59:12.818513 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 08:59:12.818659 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 08:59:12.819492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:59:12.824603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:59:12.945245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:59:12.962384 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 08:59:13.019253 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:59:13.019253 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 08:59:13.019253 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:59:13.020528 kubelet[2178]: I0116 08:59:13.020455 2178 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 08:59:13.251897 kubelet[2178]: I0116 08:59:13.250369 2178 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 16 08:59:13.251897 kubelet[2178]: I0116 08:59:13.250743 2178 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 08:59:13.251897 kubelet[2178]: I0116 08:59:13.251306 2178 server.go:919] "Client rotation is on, will bootstrap in background" Jan 16 08:59:13.278032 kubelet[2178]: I0116 08:59:13.277991 2178 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:59:13.280074 kubelet[2178]: E0116 08:59:13.280044 2178 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.182.202.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:13.292040 kubelet[2178]: I0116 08:59:13.291990 2178 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 08:59:13.292434 kubelet[2178]: I0116 08:59:13.292407 2178 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 08:59:13.293677 kubelet[2178]: I0116 08:59:13.293630 2178 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 08:59:13.293677 kubelet[2178]: I0116 08:59:13.293682 2178 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 08:59:13.293834 kubelet[2178]: I0116 08:59:13.293694 2178 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 08:59:13.293834 kubelet[2178]: I0116 08:59:13.293831 2178 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:59:13.294167 kubelet[2178]: I0116 08:59:13.293973 2178 kubelet.go:396] "Attempting to sync node with API server" Jan 16 08:59:13.294167 kubelet[2178]: I0116 08:59:13.293997 2178 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 08:59:13.294167 kubelet[2178]: I0116 08:59:13.294026 2178 kubelet.go:312] "Adding apiserver pod source" Jan 16 08:59:13.294167 kubelet[2178]: I0116 08:59:13.294039 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 08:59:13.295399 kubelet[2178]: W0116 08:59:13.295352 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.182.202.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-9-2d52908736&limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:13.295520 kubelet[2178]: E0116 08:59:13.295509 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.182.202.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-9-2d52908736&limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:13.296334 kubelet[2178]: W0116 08:59:13.295955 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.182.202.230:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:13.296334 kubelet[2178]: E0116 08:59:13.296000 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.182.202.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:13.296778 kubelet[2178]: I0116 08:59:13.296759 2178 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 08:59:13.301027 kubelet[2178]: I0116 08:59:13.300994 2178 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 08:59:13.301284 kubelet[2178]: W0116 08:59:13.301217 2178 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 08:59:13.305138 kubelet[2178]: I0116 08:59:13.304951 2178 server.go:1256] "Started kubelet" Jan 16 08:59:13.306150 kubelet[2178]: I0116 08:59:13.305950 2178 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 08:59:13.306966 kubelet[2178]: I0116 08:59:13.306729 2178 server.go:461] "Adding debug handlers to kubelet server" Jan 16 08:59:13.309156 kubelet[2178]: I0116 08:59:13.309133 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 08:59:13.309886 kubelet[2178]: I0116 08:59:13.309487 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 08:59:13.309886 kubelet[2178]: I0116 08:59:13.309666 2178 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 08:59:13.313040 kubelet[2178]: E0116 08:59:13.312578 2178 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://147.182.202.230:6443/api/v1/namespaces/default/events\": dial tcp 147.182.202.230:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-9-2d52908736.181b20a01b3748fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-9-2d52908736,UID:ci-4081.3.0-9-2d52908736,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-9-2d52908736,},FirstTimestamp:2025-01-16 08:59:13.304922363 +0000 UTC m=+0.337509973,LastTimestamp:2025-01-16 08:59:13.304922363 +0000 UTC m=+0.337509973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-9-2d52908736,}" Jan 16 08:59:13.316584 kubelet[2178]: I0116 08:59:13.316471 2178 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 08:59:13.318698 kubelet[2178]: I0116 08:59:13.318656 2178 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 16 08:59:13.318790 kubelet[2178]: I0116 08:59:13.318742 2178 reconciler_new.go:29] "Reconciler: start to sync state" Jan 16 08:59:13.319864 kubelet[2178]: W0116 08:59:13.319810 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.182.202.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:13.319976 kubelet[2178]: E0116 08:59:13.319879 2178 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.182.202.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:13.319976 kubelet[2178]: E0116 08:59:13.319965 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.202.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-9-2d52908736?timeout=10s\": dial tcp 147.182.202.230:6443: connect: connection refused" interval="200ms" Jan 16 08:59:13.321080 kubelet[2178]: I0116 08:59:13.321051 2178 factory.go:221] Registration of the systemd container factory successfully Jan 16 08:59:13.321165 kubelet[2178]: I0116 08:59:13.321151 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 08:59:13.321959 kubelet[2178]: E0116 08:59:13.321689 2178 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 08:59:13.324741 kubelet[2178]: I0116 08:59:13.324650 2178 factory.go:221] Registration of the containerd container factory successfully Jan 16 08:59:13.337397 kubelet[2178]: I0116 08:59:13.337348 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 08:59:13.338946 kubelet[2178]: I0116 08:59:13.338908 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 08:59:13.338946 kubelet[2178]: I0116 08:59:13.338957 2178 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 08:59:13.339083 kubelet[2178]: I0116 08:59:13.339001 2178 kubelet.go:2329] "Starting kubelet main sync loop" Jan 16 08:59:13.339109 kubelet[2178]: E0116 08:59:13.339097 2178 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 08:59:13.347816 kubelet[2178]: W0116 08:59:13.347525 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.182.202.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:13.347816 kubelet[2178]: E0116 08:59:13.347591 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.182.202.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:13.352758 kubelet[2178]: I0116 08:59:13.352723 2178 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 08:59:13.352758 kubelet[2178]: I0116 08:59:13.352750 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 08:59:13.352758 kubelet[2178]: I0116 08:59:13.352769 2178 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:59:13.354395 kubelet[2178]: I0116 08:59:13.354351 2178 policy_none.go:49] "None policy: Start" Jan 16 08:59:13.355649 kubelet[2178]: I0116 08:59:13.355247 2178 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 08:59:13.355649 kubelet[2178]: I0116 08:59:13.355303 2178 state_mem.go:35] "Initializing new in-memory state store" Jan 16 08:59:13.363808 
systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 08:59:13.374998 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 08:59:13.378563 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 16 08:59:13.390571 kubelet[2178]: I0116 08:59:13.390107 2178 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 08:59:13.390571 kubelet[2178]: I0116 08:59:13.390418 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 08:59:13.393411 kubelet[2178]: E0116 08:59:13.392997 2178 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-9-2d52908736\" not found" Jan 16 08:59:13.419037 kubelet[2178]: I0116 08:59:13.418466 2178 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.419037 kubelet[2178]: E0116 08:59:13.418930 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.182.202.230:6443/api/v1/nodes\": dial tcp 147.182.202.230:6443: connect: connection refused" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.440017 kubelet[2178]: I0116 08:59:13.439964 2178 topology_manager.go:215] "Topology Admit Handler" podUID="4fa6e406990eb4712283b80c87984dc2" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.442703 kubelet[2178]: I0116 08:59:13.441654 2178 topology_manager.go:215] "Topology Admit Handler" podUID="261f74c2f89fda5f0a74aca09f52f4aa" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.443529 kubelet[2178]: I0116 08:59:13.443499 2178 topology_manager.go:215] "Topology Admit Handler" podUID="070704d290f88f1c3468bc1a0b61e3cc" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.451627 systemd[1]: Created slice kubepods-burstable-pod4fa6e406990eb4712283b80c87984dc2.slice - libcontainer container kubepods-burstable-pod4fa6e406990eb4712283b80c87984dc2.slice. Jan 16 08:59:13.475723 systemd[1]: Created slice kubepods-burstable-pod261f74c2f89fda5f0a74aca09f52f4aa.slice - libcontainer container kubepods-burstable-pod261f74c2f89fda5f0a74aca09f52f4aa.slice. Jan 16 08:59:13.481711 systemd[1]: Created slice kubepods-burstable-pod070704d290f88f1c3468bc1a0b61e3cc.slice - libcontainer container kubepods-burstable-pod070704d290f88f1c3468bc1a0b61e3cc.slice. 
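A note on the deprecated-flag warnings earlier in this boot: the kubelet is asking for --container-runtime-endpoint and --volume-plugin-dir to move into the file given by --config, and the nodeConfig dump above shows the values actually in effect. A minimal KubeletConfiguration sketch reproducing those values follows; only the field values are taken from this log, while the file path and the containerd socket path are assumptions:

    # hypothetical /etc/kubernetes/kubelet-config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                 # "CgroupDriver":"systemd" in the nodeConfig above
    cgroupsPerQOS: true                   # yields the kubepods-burstable/-besteffort slices just created
    staticPodPath: /etc/kubernetes/manifests
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # assumed socket path
    evictionHard:                         # mirrors HardEvictionThresholds in the nodeConfig
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
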
Jan 16 08:59:13.520841 kubelet[2178]: E0116 08:59:13.520698 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.202.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-9-2d52908736?timeout=10s\": dial tcp 147.182.202.230:6443: connect: connection refused" interval="400ms" Jan 16 08:59:13.619535 kubelet[2178]: I0116 08:59:13.619463 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.619535 kubelet[2178]: I0116 08:59:13.619510 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4fa6e406990eb4712283b80c87984dc2-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-9-2d52908736\" (UID: \"4fa6e406990eb4712283b80c87984dc2\") " pod="kube-system/kube-scheduler-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.619535 kubelet[2178]: I0116 08:59:13.619530 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/261f74c2f89fda5f0a74aca09f52f4aa-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-9-2d52908736\" (UID: \"261f74c2f89fda5f0a74aca09f52f4aa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.619535 kubelet[2178]: I0116 08:59:13.619551 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.620027 kubelet[2178]: I0116 08:59:13.619571 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.620027 kubelet[2178]: I0116 08:59:13.619592 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.620027 kubelet[2178]: I0116 08:59:13.619609 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.620027 kubelet[2178]: I0116 08:59:13.619628 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/261f74c2f89fda5f0a74aca09f52f4aa-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-9-2d52908736\" (UID: \"261f74c2f89fda5f0a74aca09f52f4aa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.620027 kubelet[2178]: I0116 08:59:13.619661 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/261f74c2f89fda5f0a74aca09f52f4aa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-9-2d52908736\" (UID: \"261f74c2f89fda5f0a74aca09f52f4aa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.620749 kubelet[2178]: I0116 08:59:13.620469 2178 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.621006 kubelet[2178]: E0116 08:59:13.620898 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.182.202.230:6443/api/v1/nodes\": dial tcp 147.182.202.230:6443: connect: connection refused" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:13.773885 kubelet[2178]: E0116 08:59:13.773409 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:13.774363 containerd[1468]: time="2025-01-16T08:59:13.774205553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-9-2d52908736,Uid:4fa6e406990eb4712283b80c87984dc2,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:13.776037 systemd-resolved[1322]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 16 08:59:13.779988 kubelet[2178]: E0116 08:59:13.779642 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:13.780566 containerd[1468]: time="2025-01-16T08:59:13.780373712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-9-2d52908736,Uid:261f74c2f89fda5f0a74aca09f52f4aa,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:13.784944 kubelet[2178]: E0116 08:59:13.784654 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:13.785643 containerd[1468]: time="2025-01-16T08:59:13.785339589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-9-2d52908736,Uid:070704d290f88f1c3468bc1a0b61e3cc,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:13.921380 kubelet[2178]: E0116 08:59:13.921342 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.202.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-9-2d52908736?timeout=10s\": dial tcp 147.182.202.230:6443: connect: connection refused" interval="800ms" Jan 16 08:59:14.023096 kubelet[2178]: I0116 08:59:14.023061 2178 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:14.024409 kubelet[2178]: E0116 08:59:14.024304 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.182.202.230:6443/api/v1/nodes\": dial tcp 147.182.202.230:6443: connect: connection refused" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:14.235732 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount531369119.mount: Deactivated successfully. Jan 16 08:59:14.239651 containerd[1468]: time="2025-01-16T08:59:14.239607003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:59:14.241878 containerd[1468]: time="2025-01-16T08:59:14.240185793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 16 08:59:14.242050 containerd[1468]: time="2025-01-16T08:59:14.240758015Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:59:14.242380 containerd[1468]: time="2025-01-16T08:59:14.242324221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 08:59:14.242428 containerd[1468]: time="2025-01-16T08:59:14.241463671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 08:59:14.244840 containerd[1468]: time="2025-01-16T08:59:14.244797099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.344ms" Jan 16 08:59:14.247413 containerd[1468]: time="2025-01-16T08:59:14.246118128Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:59:14.247571 containerd[1468]: time="2025-01-16T08:59:14.247541819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 473.258609ms" Jan 16 08:59:14.250500 containerd[1468]: time="2025-01-16T08:59:14.250179484Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:59:14.250618 containerd[1468]: time="2025-01-16T08:59:14.250522344Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.107789ms" Jan 16 08:59:14.252545 containerd[1468]: time="2025-01-16T08:59:14.252510669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:59:14.279276 kubelet[2178]: W0116 08:59:14.279063 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://147.182.202.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-9-2d52908736&limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:14.279628 kubelet[2178]: E0116 08:59:14.279590 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.182.202.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-9-2d52908736&limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:14.394951 containerd[1468]: time="2025-01-16T08:59:14.394363747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:14.394951 containerd[1468]: time="2025-01-16T08:59:14.394430057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:14.394951 containerd[1468]: time="2025-01-16T08:59:14.394440676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:14.394951 containerd[1468]: time="2025-01-16T08:59:14.394525139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:14.405187 containerd[1468]: time="2025-01-16T08:59:14.405082463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:14.405753 containerd[1468]: time="2025-01-16T08:59:14.405709275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:14.406063 containerd[1468]: time="2025-01-16T08:59:14.405951767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:14.406400 containerd[1468]: time="2025-01-16T08:59:14.406312754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:14.406471 containerd[1468]: time="2025-01-16T08:59:14.406381431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:14.406471 containerd[1468]: time="2025-01-16T08:59:14.406399965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:14.406561 containerd[1468]: time="2025-01-16T08:59:14.406472012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:14.406706 containerd[1468]: time="2025-01-16T08:59:14.406345141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:14.439114 systemd[1]: Started cri-containerd-323df62bc122ef28a1ced81b4a24de4b861bb2bf413df84bea503038eea2ba96.scope - libcontainer container 323df62bc122ef28a1ced81b4a24de4b861bb2bf413df84bea503038eea2ba96. Jan 16 08:59:14.447065 systemd[1]: Started cri-containerd-597b1afccd366004570f5b5487c593c192c668b5aacf741b26cd151fec49a414.scope - libcontainer container 597b1afccd366004570f5b5487c593c192c668b5aacf741b26cd151fec49a414. 
Jan 16 08:59:14.450160 systemd[1]: Started cri-containerd-7eea193e7a60b5edf408dec4522f1e21de62264a7c52929e82fd9c83c3013796.scope - libcontainer container 7eea193e7a60b5edf408dec4522f1e21de62264a7c52929e82fd9c83c3013796. Jan 16 08:59:14.536064 containerd[1468]: time="2025-01-16T08:59:14.534713729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-9-2d52908736,Uid:070704d290f88f1c3468bc1a0b61e3cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"323df62bc122ef28a1ced81b4a24de4b861bb2bf413df84bea503038eea2ba96\"" Jan 16 08:59:14.540807 kubelet[2178]: E0116 08:59:14.540547 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:14.545880 kubelet[2178]: W0116 08:59:14.544637 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.182.202.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:14.545880 kubelet[2178]: E0116 08:59:14.544723 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.182.202.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:14.549691 containerd[1468]: time="2025-01-16T08:59:14.549634227Z" level=info msg="CreateContainer within sandbox \"323df62bc122ef28a1ced81b4a24de4b861bb2bf413df84bea503038eea2ba96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 08:59:14.551877 containerd[1468]: time="2025-01-16T08:59:14.551822070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-9-2d52908736,Uid:4fa6e406990eb4712283b80c87984dc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eea193e7a60b5edf408dec4522f1e21de62264a7c52929e82fd9c83c3013796\"" Jan 16 08:59:14.552926 kubelet[2178]: E0116 08:59:14.552699 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:14.555832 containerd[1468]: time="2025-01-16T08:59:14.555793703Z" level=info msg="CreateContainer within sandbox \"7eea193e7a60b5edf408dec4522f1e21de62264a7c52929e82fd9c83c3013796\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 08:59:14.560555 containerd[1468]: time="2025-01-16T08:59:14.560471756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-9-2d52908736,Uid:261f74c2f89fda5f0a74aca09f52f4aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"597b1afccd366004570f5b5487c593c192c668b5aacf741b26cd151fec49a414\"" Jan 16 08:59:14.562057 kubelet[2178]: E0116 08:59:14.561816 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:14.566766 containerd[1468]: time="2025-01-16T08:59:14.566502404Z" level=info msg="CreateContainer within sandbox \"597b1afccd366004570f5b5487c593c192c668b5aacf741b26cd151fec49a414\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 08:59:14.575169 containerd[1468]: time="2025-01-16T08:59:14.575101614Z" level=info 
msg="CreateContainer within sandbox \"323df62bc122ef28a1ced81b4a24de4b861bb2bf413df84bea503038eea2ba96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"50652d6ddc2f1656554976c57e671511abe7ea4a89881d75cca365f9294256d2\"" Jan 16 08:59:14.576021 containerd[1468]: time="2025-01-16T08:59:14.575972338Z" level=info msg="StartContainer for \"50652d6ddc2f1656554976c57e671511abe7ea4a89881d75cca365f9294256d2\"" Jan 16 08:59:14.581591 containerd[1468]: time="2025-01-16T08:59:14.581538256Z" level=info msg="CreateContainer within sandbox \"7eea193e7a60b5edf408dec4522f1e21de62264a7c52929e82fd9c83c3013796\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed62649d8fa92f292bc3e54883be5ed38977dd6c7a149a26d07ec5430fe00446\"" Jan 16 08:59:14.582409 containerd[1468]: time="2025-01-16T08:59:14.582381941Z" level=info msg="StartContainer for \"ed62649d8fa92f292bc3e54883be5ed38977dd6c7a149a26d07ec5430fe00446\"" Jan 16 08:59:14.585402 containerd[1468]: time="2025-01-16T08:59:14.585228510Z" level=info msg="CreateContainer within sandbox \"597b1afccd366004570f5b5487c593c192c668b5aacf741b26cd151fec49a414\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b76a2bf3ab84ddd50eda16c265a1938de91ed273343205a6025e8b1310dc43a5\"" Jan 16 08:59:14.586542 containerd[1468]: time="2025-01-16T08:59:14.586336469Z" level=info msg="StartContainer for \"b76a2bf3ab84ddd50eda16c265a1938de91ed273343205a6025e8b1310dc43a5\"" Jan 16 08:59:14.599200 kubelet[2178]: W0116 08:59:14.598998 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.182.202.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:14.599200 kubelet[2178]: E0116 08:59:14.599172 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.182.202.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:14.623154 systemd[1]: Started cri-containerd-b76a2bf3ab84ddd50eda16c265a1938de91ed273343205a6025e8b1310dc43a5.scope - libcontainer container b76a2bf3ab84ddd50eda16c265a1938de91ed273343205a6025e8b1310dc43a5. Jan 16 08:59:14.633213 systemd[1]: Started cri-containerd-50652d6ddc2f1656554976c57e671511abe7ea4a89881d75cca365f9294256d2.scope - libcontainer container 50652d6ddc2f1656554976c57e671511abe7ea4a89881d75cca365f9294256d2. Jan 16 08:59:14.661107 systemd[1]: Started cri-containerd-ed62649d8fa92f292bc3e54883be5ed38977dd6c7a149a26d07ec5430fe00446.scope - libcontainer container ed62649d8fa92f292bc3e54883be5ed38977dd6c7a149a26d07ec5430fe00446. 
Jan 16 08:59:14.722610 kubelet[2178]: E0116 08:59:14.722562 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.202.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-9-2d52908736?timeout=10s\": dial tcp 147.182.202.230:6443: connect: connection refused" interval="1.6s" Jan 16 08:59:14.734157 kubelet[2178]: W0116 08:59:14.734088 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.182.202.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:14.734157 kubelet[2178]: E0116 08:59:14.734159 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.182.202.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.202.230:6443: connect: connection refused Jan 16 08:59:14.744596 containerd[1468]: time="2025-01-16T08:59:14.743832475Z" level=info msg="StartContainer for \"b76a2bf3ab84ddd50eda16c265a1938de91ed273343205a6025e8b1310dc43a5\" returns successfully" Jan 16 08:59:14.765894 containerd[1468]: time="2025-01-16T08:59:14.765427090Z" level=info msg="StartContainer for \"50652d6ddc2f1656554976c57e671511abe7ea4a89881d75cca365f9294256d2\" returns successfully" Jan 16 08:59:14.771424 containerd[1468]: time="2025-01-16T08:59:14.771272006Z" level=info msg="StartContainer for \"ed62649d8fa92f292bc3e54883be5ed38977dd6c7a149a26d07ec5430fe00446\" returns successfully" Jan 16 08:59:14.826361 kubelet[2178]: I0116 08:59:14.825992 2178 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:14.826589 kubelet[2178]: E0116 08:59:14.826490 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.182.202.230:6443/api/v1/nodes\": dial tcp 147.182.202.230:6443: connect: connection refused" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:15.362245 kubelet[2178]: E0116 08:59:15.362138 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:15.364315 kubelet[2178]: E0116 08:59:15.364289 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:15.367889 kubelet[2178]: E0116 08:59:15.367864 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:16.370300 kubelet[2178]: E0116 08:59:16.370253 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:16.428251 kubelet[2178]: I0116 08:59:16.428200 2178 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:16.605794 kubelet[2178]: E0116 08:59:16.605741 2178 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-9-2d52908736\" not found" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:16.654435 kubelet[2178]: I0116 08:59:16.653956 2178 kubelet_node_status.go:76] 
"Successfully registered node" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:17.298902 kubelet[2178]: I0116 08:59:17.298472 2178 apiserver.go:52] "Watching apiserver" Jan 16 08:59:17.319564 kubelet[2178]: I0116 08:59:17.319504 2178 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 16 08:59:19.604817 systemd[1]: Reloading requested from client PID 2449 ('systemctl') (unit session-7.scope)... Jan 16 08:59:19.604833 systemd[1]: Reloading... Jan 16 08:59:19.696945 zram_generator::config[2488]: No configuration found. Jan 16 08:59:19.841105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:59:19.939033 systemd[1]: Reloading finished in 333 ms. Jan 16 08:59:19.980154 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:59:19.981278 kubelet[2178]: I0116 08:59:19.980935 2178 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:59:19.995443 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 08:59:19.995694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:59:20.001379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:59:20.150660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:59:20.162546 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 08:59:20.230986 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:59:20.230986 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 08:59:20.230986 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:59:20.230986 kubelet[2539]: I0116 08:59:20.230576 2539 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 08:59:20.238947 kubelet[2539]: I0116 08:59:20.238897 2539 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 16 08:59:20.239132 kubelet[2539]: I0116 08:59:20.239122 2539 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 08:59:20.239484 kubelet[2539]: I0116 08:59:20.239457 2539 server.go:919] "Client rotation is on, will bootstrap in background" Jan 16 08:59:20.243995 kubelet[2539]: I0116 08:59:20.243965 2539 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 08:59:20.249345 kubelet[2539]: I0116 08:59:20.248084 2539 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:59:20.266150 kubelet[2539]: I0116 08:59:20.266113 2539 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 08:59:20.266611 kubelet[2539]: I0116 08:59:20.266374 2539 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 08:59:20.266611 kubelet[2539]: I0116 08:59:20.266597 2539 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 08:59:20.266947 kubelet[2539]: I0116 08:59:20.266635 2539 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 08:59:20.266947 kubelet[2539]: I0116 08:59:20.266646 2539 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 08:59:20.266947 kubelet[2539]: I0116 08:59:20.266685 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:59:20.266947 kubelet[2539]: I0116 08:59:20.266798 2539 kubelet.go:396] "Attempting to sync node with API server" Jan 16 08:59:20.266947 kubelet[2539]: I0116 08:59:20.266811 2539 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 08:59:20.266947 kubelet[2539]: I0116 08:59:20.266838 2539 kubelet.go:312] "Adding apiserver pod source" Jan 16 08:59:20.266947 kubelet[2539]: I0116 08:59:20.266894 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 08:59:20.272725 kubelet[2539]: I0116 08:59:20.272690 2539 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 08:59:20.274082 kubelet[2539]: I0116 08:59:20.274058 2539 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 08:59:20.277191 kubelet[2539]: I0116 08:59:20.277162 2539 server.go:1256] "Started kubelet" Jan 16 08:59:20.282272 kubelet[2539]: I0116 08:59:20.282196 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 08:59:20.290525 kubelet[2539]: I0116 08:59:20.290468 2539 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 08:59:20.294191 kubelet[2539]: I0116 08:59:20.291214 2539 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 16 08:59:20.294191 kubelet[2539]: I0116 08:59:20.291378 2539 reconciler_new.go:29] "Reconciler: start to sync state" Jan 
16 08:59:20.294191 kubelet[2539]: I0116 08:59:20.292396 2539 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 08:59:20.294599 kubelet[2539]: I0116 08:59:20.294577 2539 server.go:461] "Adding debug handlers to kubelet server" Jan 16 08:59:20.296012 kubelet[2539]: I0116 08:59:20.295829 2539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 08:59:20.297322 kubelet[2539]: I0116 08:59:20.297095 2539 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 08:59:20.300294 kubelet[2539]: I0116 08:59:20.298237 2539 factory.go:221] Registration of the systemd container factory successfully Jan 16 08:59:20.300294 kubelet[2539]: I0116 08:59:20.298332 2539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 08:59:20.304235 kubelet[2539]: I0116 08:59:20.303158 2539 factory.go:221] Registration of the containerd container factory successfully Jan 16 08:59:20.312911 kubelet[2539]: E0116 08:59:20.312482 2539 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 08:59:20.315182 kubelet[2539]: I0116 08:59:20.315136 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 08:59:20.317592 kubelet[2539]: I0116 08:59:20.317559 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 08:59:20.317592 kubelet[2539]: I0116 08:59:20.317600 2539 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 08:59:20.317749 kubelet[2539]: I0116 08:59:20.317636 2539 kubelet.go:2329] "Starting kubelet main sync loop" Jan 16 08:59:20.317749 kubelet[2539]: E0116 08:59:20.317707 2539 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 08:59:20.386150 kubelet[2539]: I0116 08:59:20.386103 2539 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 08:59:20.386150 kubelet[2539]: I0116 08:59:20.386126 2539 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 08:59:20.386782 kubelet[2539]: I0116 08:59:20.386759 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:59:20.386998 kubelet[2539]: I0116 08:59:20.386979 2539 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 08:59:20.387053 kubelet[2539]: I0116 08:59:20.387020 2539 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 08:59:20.387053 kubelet[2539]: I0116 08:59:20.387032 2539 policy_none.go:49] "None policy: Start" Jan 16 08:59:20.388314 kubelet[2539]: I0116 08:59:20.388292 2539 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 08:59:20.388314 kubelet[2539]: I0116 08:59:20.388318 2539 state_mem.go:35] "Initializing new in-memory state store" Jan 16 08:59:20.388549 kubelet[2539]: I0116 08:59:20.388534 2539 state_mem.go:75] "Updated machine memory state" Jan 16 08:59:20.393704 kubelet[2539]: I0116 08:59:20.393678 2539 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.403184 kubelet[2539]: I0116 08:59:20.402788 2539 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 08:59:20.405078 
kubelet[2539]: I0116 08:59:20.404354 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 08:59:20.407872 kubelet[2539]: I0116 08:59:20.407275 2539 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.407872 kubelet[2539]: I0116 08:59:20.407338 2539 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.419877 kubelet[2539]: I0116 08:59:20.418265 2539 topology_manager.go:215] "Topology Admit Handler" podUID="261f74c2f89fda5f0a74aca09f52f4aa" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.419877 kubelet[2539]: I0116 08:59:20.418378 2539 topology_manager.go:215] "Topology Admit Handler" podUID="070704d290f88f1c3468bc1a0b61e3cc" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.419877 kubelet[2539]: I0116 08:59:20.419131 2539 topology_manager.go:215] "Topology Admit Handler" podUID="4fa6e406990eb4712283b80c87984dc2" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.440914 kubelet[2539]: W0116 08:59:20.440563 2539 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:59:20.440914 kubelet[2539]: W0116 08:59:20.440772 2539 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:59:20.440914 kubelet[2539]: W0116 08:59:20.440896 2539 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:59:20.595196 kubelet[2539]: I0116 08:59:20.593528 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.595196 kubelet[2539]: I0116 08:59:20.593596 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.595196 kubelet[2539]: I0116 08:59:20.593629 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4fa6e406990eb4712283b80c87984dc2-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-9-2d52908736\" (UID: \"4fa6e406990eb4712283b80c87984dc2\") " pod="kube-system/kube-scheduler-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.595196 kubelet[2539]: I0116 08:59:20.593668 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/261f74c2f89fda5f0a74aca09f52f4aa-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-9-2d52908736\" (UID: \"261f74c2f89fda5f0a74aca09f52f4aa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.595196 kubelet[2539]: 
I0116 08:59:20.593705 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/261f74c2f89fda5f0a74aca09f52f4aa-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-9-2d52908736\" (UID: \"261f74c2f89fda5f0a74aca09f52f4aa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.595567 kubelet[2539]: I0116 08:59:20.593737 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.595567 kubelet[2539]: I0116 08:59:20.593763 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/261f74c2f89fda5f0a74aca09f52f4aa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-9-2d52908736\" (UID: \"261f74c2f89fda5f0a74aca09f52f4aa\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.595567 kubelet[2539]: I0116 08:59:20.593789 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.595567 kubelet[2539]: I0116 08:59:20.593817 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/070704d290f88f1c3468bc1a0b61e3cc-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-9-2d52908736\" (UID: \"070704d290f88f1c3468bc1a0b61e3cc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" Jan 16 08:59:20.743131 kubelet[2539]: E0116 08:59:20.742816 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:20.745659 kubelet[2539]: E0116 08:59:20.745094 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:20.745659 kubelet[2539]: E0116 08:59:20.745597 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:21.276017 kubelet[2539]: I0116 08:59:21.275971 2539 apiserver.go:52] "Watching apiserver" Jan 16 08:59:21.292118 kubelet[2539]: I0116 08:59:21.292071 2539 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 16 08:59:21.362304 kubelet[2539]: E0116 08:59:21.362226 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:21.363172 kubelet[2539]: E0116 08:59:21.363128 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:21.414962 kubelet[2539]: W0116 08:59:21.414918 2539 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:59:21.415125 kubelet[2539]: E0116 08:59:21.415022 2539 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-9-2d52908736\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-9-2d52908736" Jan 16 08:59:21.415608 kubelet[2539]: E0116 08:59:21.415589 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:21.443866 kubelet[2539]: I0116 08:59:21.443820 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-9-2d52908736" podStartSLOduration=1.443768092 podStartE2EDuration="1.443768092s" podCreationTimestamp="2025-01-16 08:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:21.429339475 +0000 UTC m=+1.259955794" watchObservedRunningTime="2025-01-16 08:59:21.443768092 +0000 UTC m=+1.274384406" Jan 16 08:59:21.470150 kubelet[2539]: I0116 08:59:21.470116 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-9-2d52908736" podStartSLOduration=1.470041055 podStartE2EDuration="1.470041055s" podCreationTimestamp="2025-01-16 08:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:21.444364903 +0000 UTC m=+1.274981222" watchObservedRunningTime="2025-01-16 08:59:21.470041055 +0000 UTC m=+1.300657369" Jan 16 08:59:21.494478 kubelet[2539]: I0116 08:59:21.494427 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-9-2d52908736" podStartSLOduration=1.494360596 podStartE2EDuration="1.494360596s" podCreationTimestamp="2025-01-16 08:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:21.470462053 +0000 UTC m=+1.301078373" watchObservedRunningTime="2025-01-16 08:59:21.494360596 +0000 UTC m=+1.324976917" Jan 16 08:59:22.364960 kubelet[2539]: E0116 08:59:22.364145 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:22.364960 kubelet[2539]: E0116 08:59:22.364257 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:25.582172 sudo[1649]: pam_unix(sudo:session): session closed for user root Jan 16 08:59:25.587129 sshd[1646]: pam_unix(sshd:session): session closed for user core Jan 16 08:59:25.593957 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Jan 16 08:59:25.595662 systemd[1]: sshd@6-147.182.202.230:22-139.178.68.195:41948.service: Deactivated successfully. Jan 16 08:59:25.599538 systemd[1]: session-7.scope: Deactivated successfully. 
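The recurring dns.go "Nameserver limits exceeded" errors are cosmetic rather than fatal: the kubelet copies the host's /etc/resolv.conf into pod sandboxes but caps the list at the classic glibc resolver limit of three nameserver entries, logging the truncated ("applied") line whenever it has to drop some. From the applied line, the host file must contain more than three entries, the first three of which include a duplicate; a reconstruction, with everything past the third line assumed:

    # /etc/resolv.conf as implied by the applied nameserver line
    nameserver 67.207.67.2
    nameserver 67.207.67.3
    nameserver 67.207.67.2
    # at least one further nameserver entry was present and omitted
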
Jan 16 08:59:25.599798 systemd[1]: session-7.scope: Consumed 5.577s CPU time, 187.6M memory peak, 0B memory swap peak. Jan 16 08:59:25.601685 systemd-logind[1445]: Removed session 7. Jan 16 08:59:27.182311 systemd-timesyncd[1345]: Contacted time server 23.150.41.122:123 (2.flatcar.pool.ntp.org). Jan 16 08:59:27.182431 systemd-timesyncd[1345]: Initial clock synchronization to Thu 2025-01-16 08:59:27.181975 UTC. Jan 16 08:59:27.182660 systemd-resolved[1322]: Clock change detected. Flushing caches. Jan 16 08:59:28.042244 kubelet[2539]: E0116 08:59:28.041759 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:28.827736 kubelet[2539]: E0116 08:59:28.827699 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:30.598241 kubelet[2539]: E0116 08:59:30.598188 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:30.832675 kubelet[2539]: E0116 08:59:30.832564 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:32.304949 kubelet[2539]: E0116 08:59:32.303267 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:35.299752 kubelet[2539]: I0116 08:59:35.299545 2539 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 08:59:35.301235 containerd[1468]: time="2025-01-16T08:59:35.300488473Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
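With the node registered, the kubelet has just pushed PodCIDR 192.168.0.0/24 to containerd over CRI, and containerd answers that no CNI config template is set, so pod networking waits until a network plugin drops a conflist into /etc/cni/net.d (here that will be Calico, installed by the tigera-operator pod admitted shortly below). A minimal conflist of the kind that ends that wait is sketched here; the plugin type, ipam, and filename are illustrative stand-ins, not what Calico actually writes:

    # hypothetical /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "0.4.0",
      "name": "example-pod-network",
      "plugins": [
        {
          "type": "bridge",
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.0.0/24"
          }
        }
      ]
    }
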
Jan 16 08:59:35.301841 kubelet[2539]: I0116 08:59:35.300791 2539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 16 08:59:35.930843 kubelet[2539]: I0116 08:59:35.930800 2539 topology_manager.go:215] "Topology Admit Handler" podUID="1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2" podNamespace="kube-system" podName="kube-proxy-sp5bc"
Jan 16 08:59:35.945628 kubelet[2539]: I0116 08:59:35.944452 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2-xtables-lock\") pod \"kube-proxy-sp5bc\" (UID: \"1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2\") " pod="kube-system/kube-proxy-sp5bc"
Jan 16 08:59:35.945628 kubelet[2539]: I0116 08:59:35.944654 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2-lib-modules\") pod \"kube-proxy-sp5bc\" (UID: \"1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2\") " pod="kube-system/kube-proxy-sp5bc"
Jan 16 08:59:35.945628 kubelet[2539]: I0116 08:59:35.945591 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5xbf\" (UniqueName: \"kubernetes.io/projected/1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2-kube-api-access-g5xbf\") pod \"kube-proxy-sp5bc\" (UID: \"1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2\") " pod="kube-system/kube-proxy-sp5bc"
Jan 16 08:59:35.947097 kubelet[2539]: I0116 08:59:35.946368 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2-kube-proxy\") pod \"kube-proxy-sp5bc\" (UID: \"1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2\") " pod="kube-system/kube-proxy-sp5bc"
Jan 16 08:59:35.947586 systemd[1]: Created slice kubepods-besteffort-pod1f9adcb8_31ac_4b5f_911e_af1c6ed3fbb2.slice - libcontainer container kubepods-besteffort-pod1f9adcb8_31ac_4b5f_911e_af1c6ed3fbb2.slice.
Jan 16 08:59:36.060454 kubelet[2539]: E0116 08:59:36.059764 2539 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 16 08:59:36.060454 kubelet[2539]: E0116 08:59:36.059836 2539 projected.go:200] Error preparing data for projected volume kube-api-access-g5xbf for pod kube-system/kube-proxy-sp5bc: configmap "kube-root-ca.crt" not found
Jan 16 08:59:36.060454 kubelet[2539]: E0116 08:59:36.059960 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2-kube-api-access-g5xbf podName:1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2 nodeName:}" failed. No retries permitted until 2025-01-16 08:59:36.55992295 +0000 UTC m=+15.941994530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g5xbf" (UniqueName: "kubernetes.io/projected/1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2-kube-api-access-g5xbf") pod "kube-proxy-sp5bc" (UID: "1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2") : configmap "kube-root-ca.crt" not found
Jan 16 08:59:36.415934 kubelet[2539]: I0116 08:59:36.415891 2539 topology_manager.go:215] "Topology Admit Handler" podUID="413a5b2e-f545-4291-a175-9c1fa20fcffd" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-776j6"
Jan 16 08:59:36.425758 systemd[1]: Created slice kubepods-besteffort-pod413a5b2e_f545_4291_a175_9c1fa20fcffd.slice - libcontainer container kubepods-besteffort-pod413a5b2e_f545_4291_a175_9c1fa20fcffd.slice.
Jan 16 08:59:36.450566 kubelet[2539]: I0116 08:59:36.450505 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/413a5b2e-f545-4291-a175-9c1fa20fcffd-var-lib-calico\") pod \"tigera-operator-c7ccbd65-776j6\" (UID: \"413a5b2e-f545-4291-a175-9c1fa20fcffd\") " pod="tigera-operator/tigera-operator-c7ccbd65-776j6"
Jan 16 08:59:36.450566 kubelet[2539]: I0116 08:59:36.450560 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jjfw\" (UniqueName: \"kubernetes.io/projected/413a5b2e-f545-4291-a175-9c1fa20fcffd-kube-api-access-9jjfw\") pod \"tigera-operator-c7ccbd65-776j6\" (UID: \"413a5b2e-f545-4291-a175-9c1fa20fcffd\") " pod="tigera-operator/tigera-operator-c7ccbd65-776j6"
Jan 16 08:59:36.729986 containerd[1468]: time="2025-01-16T08:59:36.729829290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-776j6,Uid:413a5b2e-f545-4291-a175-9c1fa20fcffd,Namespace:tigera-operator,Attempt:0,}"
Jan 16 08:59:36.757581 containerd[1468]: time="2025-01-16T08:59:36.757242659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 08:59:36.757581 containerd[1468]: time="2025-01-16T08:59:36.757333227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 08:59:36.757581 containerd[1468]: time="2025-01-16T08:59:36.757344645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:36.757581 containerd[1468]: time="2025-01-16T08:59:36.757449890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:36.781624 systemd[1]: Started cri-containerd-23e0e28259c35821958cad190e9817e682d3b4102cffeda83332b08338b1c328.scope - libcontainer container 23e0e28259c35821958cad190e9817e682d3b4102cffeda83332b08338b1c328.
Jan 16 08:59:36.833500 containerd[1468]: time="2025-01-16T08:59:36.833454237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-776j6,Uid:413a5b2e-f545-4291-a175-9c1fa20fcffd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"23e0e28259c35821958cad190e9817e682d3b4102cffeda83332b08338b1c328\""
Jan 16 08:59:36.840615 containerd[1468]: time="2025-01-16T08:59:36.840572596Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 16 08:59:36.857920 kubelet[2539]: E0116 08:59:36.857452 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 08:59:36.859012 containerd[1468]: time="2025-01-16T08:59:36.858064462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sp5bc,Uid:1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2,Namespace:kube-system,Attempt:0,}"
Jan 16 08:59:36.882499 containerd[1468]: time="2025-01-16T08:59:36.882164242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 08:59:36.882499 containerd[1468]: time="2025-01-16T08:59:36.882264391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 08:59:36.882499 containerd[1468]: time="2025-01-16T08:59:36.882280892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:36.883476 containerd[1468]: time="2025-01-16T08:59:36.882421911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:36.907666 systemd[1]: Started cri-containerd-da794340006163811f8915b1e9d18748b218c396aeda0cf2a81c723ee3e44374.scope - libcontainer container da794340006163811f8915b1e9d18748b218c396aeda0cf2a81c723ee3e44374.
Jan 16 08:59:36.936148 containerd[1468]: time="2025-01-16T08:59:36.936083880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sp5bc,Uid:1f9adcb8-31ac-4b5f-911e-af1c6ed3fbb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"da794340006163811f8915b1e9d18748b218c396aeda0cf2a81c723ee3e44374\""
Jan 16 08:59:36.938474 kubelet[2539]: E0116 08:59:36.937376 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 08:59:36.942146 containerd[1468]: time="2025-01-16T08:59:36.942105716Z" level=info msg="CreateContainer within sandbox \"da794340006163811f8915b1e9d18748b218c396aeda0cf2a81c723ee3e44374\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 16 08:59:36.955315 containerd[1468]: time="2025-01-16T08:59:36.955263485Z" level=info msg="CreateContainer within sandbox \"da794340006163811f8915b1e9d18748b218c396aeda0cf2a81c723ee3e44374\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"171e73b8f19f3341d635d249efb567664f74040d442b210b84d84c73e67f5987\""
Jan 16 08:59:36.956231 containerd[1468]: time="2025-01-16T08:59:36.956198093Z" level=info msg="StartContainer for \"171e73b8f19f3341d635d249efb567664f74040d442b210b84d84c73e67f5987\""
Jan 16 08:59:36.990662 systemd[1]: Started cri-containerd-171e73b8f19f3341d635d249efb567664f74040d442b210b84d84c73e67f5987.scope - libcontainer container 171e73b8f19f3341d635d249efb567664f74040d442b210b84d84c73e67f5987.
Jan 16 08:59:37.025448 containerd[1468]: time="2025-01-16T08:59:37.025058590Z" level=info msg="StartContainer for \"171e73b8f19f3341d635d249efb567664f74040d442b210b84d84c73e67f5987\" returns successfully"
Jan 16 08:59:37.095068 update_engine[1446]: I20250116 08:59:37.094480 1446 update_attempter.cc:509] Updating boot flags...
Jan 16 08:59:37.135457 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2738)
Jan 16 08:59:37.216429 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2738)
Jan 16 08:59:37.302003 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2738)
Jan 16 08:59:37.855047 kubelet[2539]: E0116 08:59:37.854989 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 08:59:44.894859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664765955.mount: Deactivated successfully.
Jan 16 08:59:45.458386 containerd[1468]: time="2025-01-16T08:59:45.458317118Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 08:59:45.459381 containerd[1468]: time="2025-01-16T08:59:45.459321955Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764301"
Jan 16 08:59:45.460156 containerd[1468]: time="2025-01-16T08:59:45.459905855Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 08:59:45.462197 containerd[1468]: time="2025-01-16T08:59:45.462112457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 08:59:45.462950 containerd[1468]: time="2025-01-16T08:59:45.462913213Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 8.622300072s"
Jan 16 08:59:45.462950 containerd[1468]: time="2025-01-16T08:59:45.462951667Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 16 08:59:45.484197 containerd[1468]: time="2025-01-16T08:59:45.484141343Z" level=info msg="CreateContainer within sandbox \"23e0e28259c35821958cad190e9817e682d3b4102cffeda83332b08338b1c328\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 16 08:59:45.512110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080057846.mount: Deactivated successfully.
Jan 16 08:59:45.529445 containerd[1468]: time="2025-01-16T08:59:45.528055567Z" level=info msg="CreateContainer within sandbox \"23e0e28259c35821958cad190e9817e682d3b4102cffeda83332b08338b1c328\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"529414a6669b2a58b0812fed83d59cd4a264fa209285679d6cb4e64dbfd96652\""
Jan 16 08:59:45.530018 containerd[1468]: time="2025-01-16T08:59:45.529916327Z" level=info msg="StartContainer for \"529414a6669b2a58b0812fed83d59cd4a264fa209285679d6cb4e64dbfd96652\""
Jan 16 08:59:45.563662 systemd[1]: Started cri-containerd-529414a6669b2a58b0812fed83d59cd4a264fa209285679d6cb4e64dbfd96652.scope - libcontainer container 529414a6669b2a58b0812fed83d59cd4a264fa209285679d6cb4e64dbfd96652.
Jan 16 08:59:45.610964 containerd[1468]: time="2025-01-16T08:59:45.610917985Z" level=info msg="StartContainer for \"529414a6669b2a58b0812fed83d59cd4a264fa209285679d6cb4e64dbfd96652\" returns successfully"
Jan 16 08:59:45.898743 kubelet[2539]: I0116 08:59:45.898689 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sp5bc" podStartSLOduration=10.89863345 podStartE2EDuration="10.89863345s" podCreationTimestamp="2025-01-16 08:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:37.870757778 +0000 UTC m=+17.252829372" watchObservedRunningTime="2025-01-16 08:59:45.89863345 +0000 UTC m=+25.280705029"
Jan 16 08:59:48.811426 kubelet[2539]: I0116 08:59:48.811163 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-776j6" podStartSLOduration=4.182740187 podStartE2EDuration="12.811102085s" podCreationTimestamp="2025-01-16 08:59:36 +0000 UTC" firstStartedPulling="2025-01-16 08:59:36.835241534 +0000 UTC m=+16.217313092" lastFinishedPulling="2025-01-16 08:59:45.463603418 +0000 UTC m=+24.845674990" observedRunningTime="2025-01-16 08:59:45.900135187 +0000 UTC m=+25.282206765" watchObservedRunningTime="2025-01-16 08:59:48.811102085 +0000 UTC m=+28.193173665"
Jan 16 08:59:48.811426 kubelet[2539]: I0116 08:59:48.811340 2539 topology_manager.go:215] "Topology Admit Handler" podUID="ec94d020-78b8-4ae7-b48a-5d7aaa5d4cf1" podNamespace="calico-system" podName="calico-typha-cdb66b5f9-f2m9k"
Jan 16 08:59:48.820793 systemd[1]: Created slice kubepods-besteffort-podec94d020_78b8_4ae7_b48a_5d7aaa5d4cf1.slice - libcontainer container kubepods-besteffort-podec94d020_78b8_4ae7_b48a_5d7aaa5d4cf1.slice.
Jan 16 08:59:48.923758 kubelet[2539]: I0116 08:59:48.923616 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ec94d020-78b8-4ae7-b48a-5d7aaa5d4cf1-typha-certs\") pod \"calico-typha-cdb66b5f9-f2m9k\" (UID: \"ec94d020-78b8-4ae7-b48a-5d7aaa5d4cf1\") " pod="calico-system/calico-typha-cdb66b5f9-f2m9k"
Jan 16 08:59:48.923758 kubelet[2539]: I0116 08:59:48.923693 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfmss\" (UniqueName: \"kubernetes.io/projected/ec94d020-78b8-4ae7-b48a-5d7aaa5d4cf1-kube-api-access-vfmss\") pod \"calico-typha-cdb66b5f9-f2m9k\" (UID: \"ec94d020-78b8-4ae7-b48a-5d7aaa5d4cf1\") " pod="calico-system/calico-typha-cdb66b5f9-f2m9k"
Jan 16 08:59:48.923758 kubelet[2539]: I0116 08:59:48.923736 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec94d020-78b8-4ae7-b48a-5d7aaa5d4cf1-tigera-ca-bundle\") pod \"calico-typha-cdb66b5f9-f2m9k\" (UID: \"ec94d020-78b8-4ae7-b48a-5d7aaa5d4cf1\") " pod="calico-system/calico-typha-cdb66b5f9-f2m9k"
Jan 16 08:59:48.987434 kubelet[2539]: I0116 08:59:48.984821 2539 topology_manager.go:215] "Topology Admit Handler" podUID="0c0b8739-4893-4629-9d8a-6d5d40dd3ed6" podNamespace="calico-system" podName="calico-node-266jz"
Jan 16 08:59:48.999744 systemd[1]: Created slice kubepods-besteffort-pod0c0b8739_4893_4629_9d8a_6d5d40dd3ed6.slice - libcontainer container kubepods-besteffort-pod0c0b8739_4893_4629_9d8a_6d5d40dd3ed6.slice.
Jan 16 08:59:49.123076 kubelet[2539]: I0116 08:59:49.122943 2539 topology_manager.go:215] "Topology Admit Handler" podUID="77f8f711-c082-45da-b5d0-0016bf4eeb11" podNamespace="calico-system" podName="csi-node-driver-9xsdq"
Jan 16 08:59:49.125063 kubelet[2539]: E0116 08:59:49.123250 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsdq" podUID="77f8f711-c082-45da-b5d0-0016bf4eeb11"
Jan 16 08:59:49.125249 kubelet[2539]: E0116 08:59:49.125210 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 08:59:49.125890 kubelet[2539]: I0116 08:59:49.125466 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-var-run-calico\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.125890 kubelet[2539]: I0116 08:59:49.125509 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdrn\" (UniqueName: \"kubernetes.io/projected/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-kube-api-access-jwdrn\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.125890 kubelet[2539]: I0116 08:59:49.125536 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-policysync\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.125890 kubelet[2539]: I0116 08:59:49.125559 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-cni-log-dir\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.125890 kubelet[2539]: I0116 08:59:49.125577 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-lib-modules\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.126139 kubelet[2539]: I0116 08:59:49.125599 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-tigera-ca-bundle\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.126139 kubelet[2539]: I0116 08:59:49.125620 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-flexvol-driver-host\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.126139 kubelet[2539]: I0116 08:59:49.125641 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-xtables-lock\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.126139 kubelet[2539]: I0116 08:59:49.125663 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-cni-bin-dir\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.126139 kubelet[2539]: I0116 08:59:49.125684 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-var-lib-calico\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.126276 kubelet[2539]: I0116 08:59:49.125703 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-node-certs\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.126276 kubelet[2539]: I0116 08:59:49.125722 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0c0b8739-4893-4629-9d8a-6d5d40dd3ed6-cni-net-dir\") pod \"calico-node-266jz\" (UID: \"0c0b8739-4893-4629-9d8a-6d5d40dd3ed6\") " pod="calico-system/calico-node-266jz"
Jan 16 08:59:49.127013 containerd[1468]: time="2025-01-16T08:59:49.126845000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cdb66b5f9-f2m9k,Uid:ec94d020-78b8-4ae7-b48a-5d7aaa5d4cf1,Namespace:calico-system,Attempt:0,}"
Jan 16 08:59:49.185977 containerd[1468]: time="2025-01-16T08:59:49.185527163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 08:59:49.185977 containerd[1468]: time="2025-01-16T08:59:49.185627384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 08:59:49.185977 containerd[1468]: time="2025-01-16T08:59:49.185645428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:49.185977 containerd[1468]: time="2025-01-16T08:59:49.185809721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:49.230431 kubelet[2539]: I0116 08:59:49.227035 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77f8f711-c082-45da-b5d0-0016bf4eeb11-socket-dir\") pod \"csi-node-driver-9xsdq\" (UID: \"77f8f711-c082-45da-b5d0-0016bf4eeb11\") " pod="calico-system/csi-node-driver-9xsdq"
Jan 16 08:59:49.230431 kubelet[2539]: I0116 08:59:49.227111 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gxgl\" (UniqueName: \"kubernetes.io/projected/77f8f711-c082-45da-b5d0-0016bf4eeb11-kube-api-access-2gxgl\") pod \"csi-node-driver-9xsdq\" (UID: \"77f8f711-c082-45da-b5d0-0016bf4eeb11\") " pod="calico-system/csi-node-driver-9xsdq"
Jan 16 08:59:49.230431 kubelet[2539]: I0116 08:59:49.227176 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77f8f711-c082-45da-b5d0-0016bf4eeb11-kubelet-dir\") pod \"csi-node-driver-9xsdq\" (UID: \"77f8f711-c082-45da-b5d0-0016bf4eeb11\") " pod="calico-system/csi-node-driver-9xsdq"
Jan 16 08:59:49.230431 kubelet[2539]: I0116 08:59:49.227229 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77f8f711-c082-45da-b5d0-0016bf4eeb11-varrun\") pod \"csi-node-driver-9xsdq\" (UID: \"77f8f711-c082-45da-b5d0-0016bf4eeb11\") " pod="calico-system/csi-node-driver-9xsdq"
Jan 16 08:59:49.230431 kubelet[2539]: I0116 08:59:49.227381 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77f8f711-c082-45da-b5d0-0016bf4eeb11-registration-dir\") pod \"csi-node-driver-9xsdq\" (UID: \"77f8f711-c082-45da-b5d0-0016bf4eeb11\") " pod="calico-system/csi-node-driver-9xsdq"
Jan 16 08:59:49.245248 kubelet[2539]: E0116 08:59:49.244660 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.245248 kubelet[2539]: W0116 08:59:49.244698 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.245248 kubelet[2539]: E0116 08:59:49.244735 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.246828 kubelet[2539]: E0116 08:59:49.246768 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.247557 kubelet[2539]: W0116 08:59:49.247291 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.247557 kubelet[2539]: E0116 08:59:49.247353 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.250718 kubelet[2539]: E0116 08:59:49.249812 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.250718 kubelet[2539]: W0116 08:59:49.249861 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.250718 kubelet[2539]: E0116 08:59:49.249895 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.252582 kubelet[2539]: E0116 08:59:49.252374 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.252582 kubelet[2539]: W0116 08:59:49.252432 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.252582 kubelet[2539]: E0116 08:59:49.252465 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.253654 systemd[1]: Started cri-containerd-e54c176842f8a64b5aaf15c10efb775f9d84a8b6e7e208a0cf543e0bbad27f84.scope - libcontainer container e54c176842f8a64b5aaf15c10efb775f9d84a8b6e7e208a0cf543e0bbad27f84.
Jan 16 08:59:49.275716 kubelet[2539]: E0116 08:59:49.275676 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.275716 kubelet[2539]: W0116 08:59:49.275703 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.275970 kubelet[2539]: E0116 08:59:49.275737 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.307778 kubelet[2539]: E0116 08:59:49.306881 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 08:59:49.308942 containerd[1468]: time="2025-01-16T08:59:49.308272790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-266jz,Uid:0c0b8739-4893-4629-9d8a-6d5d40dd3ed6,Namespace:calico-system,Attempt:0,}"
Jan 16 08:59:49.342794 kubelet[2539]: E0116 08:59:49.342606 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.342794 kubelet[2539]: W0116 08:59:49.342643 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.342794 kubelet[2539]: E0116 08:59:49.342671 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.343640 kubelet[2539]: E0116 08:59:49.343283 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.343640 kubelet[2539]: W0116 08:59:49.343296 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.343640 kubelet[2539]: E0116 08:59:49.343604 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.344634 kubelet[2539]: E0116 08:59:49.344345 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.344634 kubelet[2539]: W0116 08:59:49.344361 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.344634 kubelet[2539]: E0116 08:59:49.344476 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.345315 kubelet[2539]: E0116 08:59:49.345201 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.345315 kubelet[2539]: W0116 08:59:49.345215 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.345315 kubelet[2539]: E0116 08:59:49.345308 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.345792 kubelet[2539]: E0116 08:59:49.345525 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.345792 kubelet[2539]: W0116 08:59:49.345540 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.345792 kubelet[2539]: E0116 08:59:49.345635 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.346382 kubelet[2539]: E0116 08:59:49.345918 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.346382 kubelet[2539]: W0116 08:59:49.345929 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.346382 kubelet[2539]: E0116 08:59:49.346045 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.346860 kubelet[2539]: E0116 08:59:49.346517 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.346860 kubelet[2539]: W0116 08:59:49.346532 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.346860 kubelet[2539]: E0116 08:59:49.346634 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.347512 kubelet[2539]: E0116 08:59:49.347275 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.347512 kubelet[2539]: W0116 08:59:49.347296 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.347512 kubelet[2539]: E0116 08:59:49.347335 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.350425 kubelet[2539]: E0116 08:59:49.349014 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.350425 kubelet[2539]: W0116 08:59:49.349044 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.350425 kubelet[2539]: E0116 08:59:49.349519 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.350425 kubelet[2539]: W0116 08:59:49.349536 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.350425 kubelet[2539]: E0116 08:59:49.350383 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.350425 kubelet[2539]: W0116 08:59:49.350443 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.351420 kubelet[2539]: E0116 08:59:49.350878 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.351420 kubelet[2539]: E0116 08:59:49.350899 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.351420 kubelet[2539]: W0116 08:59:49.350913 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.351420 kubelet[2539]: E0116 08:59:49.350928 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.351420 kubelet[2539]: E0116 08:59:49.350958 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.351420 kubelet[2539]: E0116 08:59:49.350978 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.351905 kubelet[2539]: E0116 08:59:49.351822 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.351905 kubelet[2539]: W0116 08:59:49.351839 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.352001 kubelet[2539]: E0116 08:59:49.351975 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.352995 kubelet[2539]: E0116 08:59:49.352761 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.352995 kubelet[2539]: W0116 08:59:49.352778 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.352995 kubelet[2539]: E0116 08:59:49.352806 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.356940 kubelet[2539]: E0116 08:59:49.354692 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.356940 kubelet[2539]: W0116 08:59:49.354809 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.356940 kubelet[2539]: E0116 08:59:49.354842 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.356940 kubelet[2539]: E0116 08:59:49.355226 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.356940 kubelet[2539]: W0116 08:59:49.355239 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.356940 kubelet[2539]: E0116 08:59:49.355472 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.358216 kubelet[2539]: E0116 08:59:49.357688 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.358216 kubelet[2539]: W0116 08:59:49.357719 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.358216 kubelet[2539]: E0116 08:59:49.357756 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.363200 kubelet[2539]: E0116 08:59:49.361664 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.363200 kubelet[2539]: W0116 08:59:49.361704 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.363200 kubelet[2539]: E0116 08:59:49.361798 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.363200 kubelet[2539]: E0116 08:59:49.362359 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.363200 kubelet[2539]: W0116 08:59:49.362377 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.363200 kubelet[2539]: E0116 08:59:49.362859 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.363200 kubelet[2539]: W0116 08:59:49.362875 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.363906 kubelet[2539]: E0116 08:59:49.363751 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.363906 kubelet[2539]: E0116 08:59:49.363837 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.364845 kubelet[2539]: E0116 08:59:49.364483 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.364845 kubelet[2539]: W0116 08:59:49.364504 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.365009 kubelet[2539]: E0116 08:59:49.364819 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.367961 kubelet[2539]: E0116 08:59:49.366948 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.367961 kubelet[2539]: W0116 08:59:49.366977 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.367961 kubelet[2539]: E0116 08:59:49.367581 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.367961 kubelet[2539]: W0116 08:59:49.367598 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.368499 kubelet[2539]: E0116 08:59:49.368462 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.368589 kubelet[2539]: E0116 08:59:49.368530 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.368876 kubelet[2539]: E0116 08:59:49.368854 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.369033 kubelet[2539]: W0116 08:59:49.369013 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.369618 kubelet[2539]: E0116 08:59:49.369593 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.372445 kubelet[2539]: E0116 08:59:49.371057 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.372896 kubelet[2539]: W0116 08:59:49.371078 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.372896 kubelet[2539]: E0116 08:59:49.372733 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.389508 containerd[1468]: time="2025-01-16T08:59:49.387044398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 08:59:49.389508 containerd[1468]: time="2025-01-16T08:59:49.388248133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 08:59:49.389508 containerd[1468]: time="2025-01-16T08:59:49.388264247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:49.389508 containerd[1468]: time="2025-01-16T08:59:49.388409573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:49.404492 kubelet[2539]: E0116 08:59:49.403789 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 16 08:59:49.404492 kubelet[2539]: W0116 08:59:49.403822 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 16 08:59:49.404492 kubelet[2539]: E0116 08:59:49.403855 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 16 08:59:49.432826 systemd[1]: Started cri-containerd-b8be6905f38e243832529465ebda2b2d3596544c8d2f87aef722dd2ebf3013f6.scope - libcontainer container b8be6905f38e243832529465ebda2b2d3596544c8d2f87aef722dd2ebf3013f6.
Jan 16 08:59:49.496813 containerd[1468]: time="2025-01-16T08:59:49.496752278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cdb66b5f9-f2m9k,Uid:ec94d020-78b8-4ae7-b48a-5d7aaa5d4cf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"e54c176842f8a64b5aaf15c10efb775f9d84a8b6e7e208a0cf543e0bbad27f84\""
Jan 16 08:59:49.499614 kubelet[2539]: E0116 08:59:49.498261 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 08:59:49.501699 containerd[1468]: time="2025-01-16T08:59:49.501644526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 16 08:59:49.519241 containerd[1468]: time="2025-01-16T08:59:49.518701054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-266jz,Uid:0c0b8739-4893-4629-9d8a-6d5d40dd3ed6,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8be6905f38e243832529465ebda2b2d3596544c8d2f87aef722dd2ebf3013f6\""
Jan 16 08:59:49.523034 kubelet[2539]: E0116 08:59:49.522699 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 08:59:50.056584 systemd[1]: run-containerd-runc-k8s.io-e54c176842f8a64b5aaf15c10efb775f9d84a8b6e7e208a0cf543e0bbad27f84-runc.0mPejQ.mount: Deactivated successfully.
Jan 16 08:59:50.768471 kubelet[2539]: E0116 08:59:50.767875 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsdq" podUID="77f8f711-c082-45da-b5d0-0016bf4eeb11"
Jan 16 08:59:50.959123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845831913.mount: Deactivated successfully.
Jan 16 08:59:51.740762 containerd[1468]: time="2025-01-16T08:59:51.740683319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 08:59:51.742099 containerd[1468]: time="2025-01-16T08:59:51.742052561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 16 08:59:51.742818 containerd[1468]: time="2025-01-16T08:59:51.742789757Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 08:59:51.745825 containerd[1468]: time="2025-01-16T08:59:51.745767552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 08:59:51.748208 containerd[1468]: time="2025-01-16T08:59:51.748137458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.246434863s"
Jan 16 08:59:51.748208 containerd[1468]: time="2025-01-16T08:59:51.748211051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 16 08:59:51.751820 containerd[1468]: time="2025-01-16T08:59:51.751773774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 16 08:59:51.768789 containerd[1468]: time="2025-01-16T08:59:51.768746862Z" level=info msg="CreateContainer within sandbox \"e54c176842f8a64b5aaf15c10efb775f9d84a8b6e7e208a0cf543e0bbad27f84\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 16 08:59:51.782734 containerd[1468]: time="2025-01-16T08:59:51.782652130Z" level=info msg="CreateContainer within sandbox \"e54c176842f8a64b5aaf15c10efb775f9d84a8b6e7e208a0cf543e0bbad27f84\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8e531a9b49fe29d1d93bbe67d6ac659ae67dba0abe9f7c5f5183491f34ed435c\""
Jan 16 08:59:51.784167 containerd[1468]: time="2025-01-16T08:59:51.784119944Z" level=info msg="StartContainer for \"8e531a9b49fe29d1d93bbe67d6ac659ae67dba0abe9f7c5f5183491f34ed435c\""
Jan 16 08:59:51.839634 systemd[1]: Started cri-containerd-8e531a9b49fe29d1d93bbe67d6ac659ae67dba0abe9f7c5f5183491f34ed435c.scope - libcontainer container 8e531a9b49fe29d1d93bbe67d6ac659ae67dba0abe9f7c5f5183491f34ed435c.
Jan 16 08:59:51.901729 containerd[1468]: time="2025-01-16T08:59:51.901670565Z" level=info msg="StartContainer for \"8e531a9b49fe29d1d93bbe67d6ac659ae67dba0abe9f7c5f5183491f34ed435c\" returns successfully" Jan 16 08:59:52.768111 kubelet[2539]: E0116 08:59:52.767602 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsdq" podUID="77f8f711-c082-45da-b5d0-0016bf4eeb11" Jan 16 08:59:52.907131 kubelet[2539]: E0116 08:59:52.907084 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:52.948596 kubelet[2539]: I0116 08:59:52.948549 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-cdb66b5f9-f2m9k" podStartSLOduration=2.699685998 podStartE2EDuration="4.948504463s" podCreationTimestamp="2025-01-16 08:59:48 +0000 UTC" firstStartedPulling="2025-01-16 08:59:49.499793327 +0000 UTC m=+28.881864887" lastFinishedPulling="2025-01-16 08:59:51.748611794 +0000 UTC m=+31.130683352" observedRunningTime="2025-01-16 08:59:52.927606451 +0000 UTC m=+32.309678029" watchObservedRunningTime="2025-01-16 08:59:52.948504463 +0000 UTC m=+32.330576059" Jan 16 08:59:52.966673 kubelet[2539]: E0116 08:59:52.966312 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 08:59:52.966673 kubelet[2539]: W0116 08:59:52.966351 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 08:59:52.966673 kubelet[2539]: E0116 08:59:52.966378 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 08:59:52.967833 kubelet[2539]: E0116 08:59:52.967559 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 08:59:52.967833 kubelet[2539]: W0116 08:59:52.967622 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 08:59:52.967833 kubelet[2539]: E0116 08:59:52.967653 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 08:59:52.968201 kubelet[2539]: E0116 08:59:52.968147 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 08:59:52.968201 kubelet[2539]: W0116 08:59:52.968163 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 08:59:52.968201 kubelet[2539]: E0116 08:59:52.968186 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the preceding three-entry FlexVolume probe failure repeats verbatim, timestamps advancing from 08:59:52.969 through 08:59:52.988; roughly thirty duplicate occurrences omitted] Jan 16 08:59:53.021957 kubelet[2539]: E0116 08:59:53.021590 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 08:59:53.021957 kubelet[2539]: W0116 08:59:53.021621 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 08:59:53.021957 kubelet[2539]: E0116 08:59:53.021658 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 08:59:53.174633 containerd[1468]: time="2025-01-16T08:59:53.173599063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:53.175277 containerd[1468]: time="2025-01-16T08:59:53.175219304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 16 08:59:53.176413 containerd[1468]: time="2025-01-16T08:59:53.176300951Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:53.183270 containerd[1468]: time="2025-01-16T08:59:53.182664435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:53.184147 containerd[1468]: time="2025-01-16T08:59:53.184096669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.432267481s" Jan 16 08:59:53.184147 containerd[1468]: time="2025-01-16T08:59:53.184148070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 16 08:59:53.188596 containerd[1468]: time="2025-01-16T08:59:53.188126260Z" level=info msg="CreateContainer within sandbox \"b8be6905f38e243832529465ebda2b2d3596544c8d2f87aef722dd2ebf3013f6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 08:59:53.253205 containerd[1468]: time="2025-01-16T08:59:53.253152505Z" level=info msg="CreateContainer within sandbox \"b8be6905f38e243832529465ebda2b2d3596544c8d2f87aef722dd2ebf3013f6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"784172eefcc1dbbd8e34c67a3e11dad8baff4c34020fd45854f9e81a7a5b5f85\"" Jan 16 08:59:53.254224 containerd[1468]: time="2025-01-16T08:59:53.253941200Z" level=info msg="StartContainer for \"784172eefcc1dbbd8e34c67a3e11dad8baff4c34020fd45854f9e81a7a5b5f85\"" Jan 16 08:59:53.324652 systemd[1]: Started cri-containerd-784172eefcc1dbbd8e34c67a3e11dad8baff4c34020fd45854f9e81a7a5b5f85.scope - libcontainer container 784172eefcc1dbbd8e34c67a3e11dad8baff4c34020fd45854f9e81a7a5b5f85. Jan 16 08:59:53.367157 containerd[1468]: time="2025-01-16T08:59:53.367086263Z" level=info msg="StartContainer for \"784172eefcc1dbbd8e34c67a3e11dad8baff4c34020fd45854f9e81a7a5b5f85\" returns successfully" Jan 16 08:59:53.383616 systemd[1]: cri-containerd-784172eefcc1dbbd8e34c67a3e11dad8baff4c34020fd45854f9e81a7a5b5f85.scope: Deactivated successfully. 
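The FlexVolume probe failures above and the pod2daemon-flexvol pull that follows them are two halves of the same story: the kubelet periodically execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and expects a JSON status object on stdout, and that binary does not exist until Calico's flexvol-driver container (built from the pod2daemon-flexvol image pulled above) installs it. A minimal sketch of the call contract, assuming the standard FlexVolume init reply shape; this is an illustration, not Calico's actual uds driver:

```go
// flexvol-init-sketch: a minimal FlexVolume-style driver entry point.
// The kubelet invokes the driver binary as `<driver> init` and parses a
// JSON object from stdout; empty stdout from a missing or broken driver
// is what produces the "unexpected end of JSON input" errors in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// initReply mirrors the documented FlexVolume status object.
type initReply struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(initReply{
			Status: "Success",
			// attach=false: this driver has no attach/detach phase.
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Unimplemented calls report "Not supported" per the convention.
	out, _ := json.Marshal(initReply{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```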
Jan 16 08:59:53.513087 containerd[1468]: time="2025-01-16T08:59:53.502619089Z" level=info msg="shim disconnected" id=784172eefcc1dbbd8e34c67a3e11dad8baff4c34020fd45854f9e81a7a5b5f85 namespace=k8s.io Jan 16 08:59:53.513087 containerd[1468]: time="2025-01-16T08:59:53.512452031Z" level=warning msg="cleaning up after shim disconnected" id=784172eefcc1dbbd8e34c67a3e11dad8baff4c34020fd45854f9e81a7a5b5f85 namespace=k8s.io Jan 16 08:59:53.513087 containerd[1468]: time="2025-01-16T08:59:53.512470481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:53.756642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-784172eefcc1dbbd8e34c67a3e11dad8baff4c34020fd45854f9e81a7a5b5f85-rootfs.mount: Deactivated successfully. Jan 16 08:59:53.911848 kubelet[2539]: E0116 08:59:53.911461 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:53.911848 kubelet[2539]: E0116 08:59:53.911475 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:53.930251 containerd[1468]: time="2025-01-16T08:59:53.930199322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 16 08:59:54.768108 kubelet[2539]: E0116 08:59:54.766831 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsdq" podUID="77f8f711-c082-45da-b5d0-0016bf4eeb11" Jan 16 08:59:54.915254 kubelet[2539]: E0116 08:59:54.915211 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:56.768768 kubelet[2539]: E0116 08:59:56.767740 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xsdq" podUID="77f8f711-c082-45da-b5d0-0016bf4eeb11" Jan 16 08:59:57.275477 containerd[1468]: time="2025-01-16T08:59:57.275422999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:57.276765 containerd[1468]: time="2025-01-16T08:59:57.276690609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 16 08:59:57.277342 containerd[1468]: time="2025-01-16T08:59:57.277307112Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:57.279456 containerd[1468]: time="2025-01-16T08:59:57.279421325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:57.281073 containerd[1468]: time="2025-01-16T08:59:57.281035199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.350572748s" Jan 16 08:59:57.281156 containerd[1468]: time="2025-01-16T08:59:57.281086992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 16 08:59:57.285205 containerd[1468]: time="2025-01-16T08:59:57.285139501Z" level=info msg="CreateContainer within sandbox \"b8be6905f38e243832529465ebda2b2d3596544c8d2f87aef722dd2ebf3013f6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 08:59:57.325797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285435969.mount: Deactivated successfully. Jan 16 08:59:57.335782 containerd[1468]: time="2025-01-16T08:59:57.335670780Z" level=info msg="CreateContainer within sandbox \"b8be6905f38e243832529465ebda2b2d3596544c8d2f87aef722dd2ebf3013f6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e1d85a12f061d7203ee21660be7e0258c110f4d5d1c3a141d29f9c9ccf5a5f7b\"" Jan 16 08:59:57.337936 containerd[1468]: time="2025-01-16T08:59:57.337889249Z" level=info msg="StartContainer for \"e1d85a12f061d7203ee21660be7e0258c110f4d5d1c3a141d29f9c9ccf5a5f7b\"" Jan 16 08:59:57.406023 systemd[1]: Started cri-containerd-e1d85a12f061d7203ee21660be7e0258c110f4d5d1c3a141d29f9c9ccf5a5f7b.scope - libcontainer container e1d85a12f061d7203ee21660be7e0258c110f4d5d1c3a141d29f9c9ccf5a5f7b. Jan 16 08:59:57.463365 containerd[1468]: time="2025-01-16T08:59:57.462874424Z" level=info msg="StartContainer for \"e1d85a12f061d7203ee21660be7e0258c110f4d5d1c3a141d29f9c9ccf5a5f7b\" returns successfully" Jan 16 08:59:57.927134 kubelet[2539]: E0116 08:59:57.927100 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:58.222024 systemd[1]: cri-containerd-e1d85a12f061d7203ee21660be7e0258c110f4d5d1c3a141d29f9c9ccf5a5f7b.scope: Deactivated successfully. Jan 16 08:59:58.274988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1d85a12f061d7203ee21660be7e0258c110f4d5d1c3a141d29f9c9ccf5a5f7b-rootfs.mount: Deactivated successfully. 
Jan 16 08:59:58.278572 containerd[1468]: time="2025-01-16T08:59:58.278497101Z" level=info msg="shim disconnected" id=e1d85a12f061d7203ee21660be7e0258c110f4d5d1c3a141d29f9c9ccf5a5f7b namespace=k8s.io Jan 16 08:59:58.278572 containerd[1468]: time="2025-01-16T08:59:58.278559471Z" level=warning msg="cleaning up after shim disconnected" id=e1d85a12f061d7203ee21660be7e0258c110f4d5d1c3a141d29f9c9ccf5a5f7b namespace=k8s.io Jan 16 08:59:58.278572 containerd[1468]: time="2025-01-16T08:59:58.278568324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:58.289418 kubelet[2539]: I0116 08:59:58.289352 2539 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 16 08:59:58.325098 kubelet[2539]: I0116 08:59:58.324999 2539 topology_manager.go:215] "Topology Admit Handler" podUID="d521e8c4-e6b6-49d5-b863-b778812328d0" podNamespace="kube-system" podName="coredns-76f75df574-gz6pg" Jan 16 08:59:58.333902 kubelet[2539]: I0116 08:59:58.333754 2539 topology_manager.go:215] "Topology Admit Handler" podUID="7552fa54-b39b-428e-9b31-66fd48108761" podNamespace="kube-system" podName="coredns-76f75df574-c4lh9" Jan 16 08:59:58.334039 kubelet[2539]: I0116 08:59:58.333930 2539 topology_manager.go:215] "Topology Admit Handler" podUID="5717cdda-4a10-4088-8b75-4fff7e8b3b8d" podNamespace="calico-system" podName="calico-kube-controllers-695856fb7d-5l4ph" Jan 16 08:59:58.334072 kubelet[2539]: I0116 08:59:58.334059 2539 topology_manager.go:215] "Topology Admit Handler" podUID="dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a" podNamespace="calico-apiserver" podName="calico-apiserver-5948865d94-pzkn6" Jan 16 08:59:58.336314 systemd[1]: Created slice kubepods-burstable-podd521e8c4_e6b6_49d5_b863_b778812328d0.slice - libcontainer container kubepods-burstable-podd521e8c4_e6b6_49d5_b863_b778812328d0.slice. 
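The recurring dns.go:153 entries are the kubelet enforcing the glibc resolver limit of three nameserver entries: anything beyond three in the node's resolv.conf is dropped and the applied line is logged (here the applied line carries 67.207.67.2 twice, suggesting the droplet's resolv.conf repeats that server). A sketch of the truncation, assuming a plain resolv.conf parser; the kubelet's real implementation also caps search domains and options:

```go
// nameserver-limit-sketch: keep only the first three nameserver entries
// from a resolv.conf, mirroring the limit the kubelet warns about.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("limits exceeded, omitting %d entries\n",
			len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```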
Jan 16 08:59:58.339675 kubelet[2539]: I0116 08:59:58.339239 2539 topology_manager.go:215] "Topology Admit Handler" podUID="7ec8f642-82a0-4595-a31f-bbaab8ff9d73" podNamespace="calico-apiserver" podName="calico-apiserver-5948865d94-2bhf5" Jan 16 08:59:58.347326 kubelet[2539]: W0116 08:59:58.346129 2539 reflector.go:539] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.0-9-2d52908736" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.0-9-2d52908736' and this object Jan 16 08:59:58.347326 kubelet[2539]: E0116 08:59:58.346170 2539 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.0-9-2d52908736" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.0-9-2d52908736' and this object Jan 16 08:59:58.347326 kubelet[2539]: W0116 08:59:58.346211 2539 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-9-2d52908736" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.0-9-2d52908736' and this object Jan 16 08:59:58.347326 kubelet[2539]: E0116 08:59:58.346222 2539 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-9-2d52908736" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.0-9-2d52908736' and this object Jan 16 08:59:58.351387 systemd[1]: Created slice kubepods-besteffort-poddbcd6f2e_da2f_4282_a7c3_3ba835a7bf1a.slice - libcontainer container kubepods-besteffort-poddbcd6f2e_da2f_4282_a7c3_3ba835a7bf1a.slice. Jan 16 08:59:58.362890 systemd[1]: Created slice kubepods-burstable-pod7552fa54_b39b_428e_9b31_66fd48108761.slice - libcontainer container kubepods-burstable-pod7552fa54_b39b_428e_9b31_66fd48108761.slice. Jan 16 08:59:58.374214 systemd[1]: Created slice kubepods-besteffort-pod5717cdda_4a10_4088_8b75_4fff7e8b3b8d.slice - libcontainer container kubepods-besteffort-pod5717cdda_4a10_4088_8b75_4fff7e8b3b8d.slice. Jan 16 08:59:58.382988 systemd[1]: Created slice kubepods-besteffort-pod7ec8f642_82a0_4595_a31f_bbaab8ff9d73.slice - libcontainer container kubepods-besteffort-pod7ec8f642_82a0_4595_a31f_bbaab8ff9d73.slice. 
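The reflector warnings above are the node authorizer at work rather than a misconfiguration: a kubelet may only read a Secret or ConfigMap once a pod referencing it is bound to that node, so the first list of calico-apiserver-certs and kube-root-ca.crt is rejected with "no relationship found" and succeeds on retry. A sketch of the equivalent client-go call, assuming a kubeconfig taken from $KUBECONFIG purely for illustration (the kubelet uses its own node credentials):

```go
// secret-list-sketch: the kind of single-object read the kubelet's
// reflector issues for a pod's Secret. Under the node authorizer this
// is forbidden until a pod using the Secret is bound to this node.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig source, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Reflectors scope single-object watches with a field selector.
	secrets, err := cs.CoreV1().Secrets("calico-apiserver").List(
		context.Background(),
		metav1.ListOptions{FieldSelector: "metadata.name=calico-apiserver-certs"})
	if err != nil {
		log.Fatal(err) // forbidden until the pod-to-node binding exists
	}
	fmt.Println("visible secrets:", len(secrets.Items))
}
```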
Jan 16 08:59:58.424856 kubelet[2539]: I0116 08:59:58.424790 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7552fa54-b39b-428e-9b31-66fd48108761-config-volume\") pod \"coredns-76f75df574-c4lh9\" (UID: \"7552fa54-b39b-428e-9b31-66fd48108761\") " pod="kube-system/coredns-76f75df574-c4lh9" Jan 16 08:59:58.424856 kubelet[2539]: I0116 08:59:58.424868 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2t42\" (UniqueName: \"kubernetes.io/projected/7ec8f642-82a0-4595-a31f-bbaab8ff9d73-kube-api-access-g2t42\") pod \"calico-apiserver-5948865d94-2bhf5\" (UID: \"7ec8f642-82a0-4595-a31f-bbaab8ff9d73\") " pod="calico-apiserver/calico-apiserver-5948865d94-2bhf5" Jan 16 08:59:58.425059 kubelet[2539]: I0116 08:59:58.424917 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d521e8c4-e6b6-49d5-b863-b778812328d0-config-volume\") pod \"coredns-76f75df574-gz6pg\" (UID: \"d521e8c4-e6b6-49d5-b863-b778812328d0\") " pod="kube-system/coredns-76f75df574-gz6pg" Jan 16 08:59:58.425059 kubelet[2539]: I0116 08:59:58.424948 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7q7m\" (UniqueName: \"kubernetes.io/projected/dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a-kube-api-access-c7q7m\") pod \"calico-apiserver-5948865d94-pzkn6\" (UID: \"dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a\") " pod="calico-apiserver/calico-apiserver-5948865d94-pzkn6" Jan 16 08:59:58.425059 kubelet[2539]: I0116 08:59:58.424989 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5717cdda-4a10-4088-8b75-4fff7e8b3b8d-tigera-ca-bundle\") pod \"calico-kube-controllers-695856fb7d-5l4ph\" (UID: \"5717cdda-4a10-4088-8b75-4fff7e8b3b8d\") " pod="calico-system/calico-kube-controllers-695856fb7d-5l4ph" Jan 16 08:59:58.425148 kubelet[2539]: I0116 08:59:58.425064 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ec8f642-82a0-4595-a31f-bbaab8ff9d73-calico-apiserver-certs\") pod \"calico-apiserver-5948865d94-2bhf5\" (UID: \"7ec8f642-82a0-4595-a31f-bbaab8ff9d73\") " pod="calico-apiserver/calico-apiserver-5948865d94-2bhf5" Jan 16 08:59:58.425148 kubelet[2539]: I0116 08:59:58.425117 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a-calico-apiserver-certs\") pod \"calico-apiserver-5948865d94-pzkn6\" (UID: \"dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a\") " pod="calico-apiserver/calico-apiserver-5948865d94-pzkn6" Jan 16 08:59:58.425148 kubelet[2539]: I0116 08:59:58.425137 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9659p\" (UniqueName: \"kubernetes.io/projected/5717cdda-4a10-4088-8b75-4fff7e8b3b8d-kube-api-access-9659p\") pod \"calico-kube-controllers-695856fb7d-5l4ph\" (UID: \"5717cdda-4a10-4088-8b75-4fff7e8b3b8d\") " pod="calico-system/calico-kube-controllers-695856fb7d-5l4ph" Jan 16 08:59:58.425622 kubelet[2539]: I0116 08:59:58.425281 2539 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pskz6\" (UniqueName: \"kubernetes.io/projected/d521e8c4-e6b6-49d5-b863-b778812328d0-kube-api-access-pskz6\") pod \"coredns-76f75df574-gz6pg\" (UID: \"d521e8c4-e6b6-49d5-b863-b778812328d0\") " pod="kube-system/coredns-76f75df574-gz6pg" Jan 16 08:59:58.425709 kubelet[2539]: I0116 08:59:58.425662 2539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8p2\" (UniqueName: \"kubernetes.io/projected/7552fa54-b39b-428e-9b31-66fd48108761-kube-api-access-qb8p2\") pod \"coredns-76f75df574-c4lh9\" (UID: \"7552fa54-b39b-428e-9b31-66fd48108761\") " pod="kube-system/coredns-76f75df574-c4lh9" Jan 16 08:59:58.645728 kubelet[2539]: E0116 08:59:58.645682 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:58.647263 containerd[1468]: time="2025-01-16T08:59:58.646493934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gz6pg,Uid:d521e8c4-e6b6-49d5-b863-b778812328d0,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:58.671771 kubelet[2539]: E0116 08:59:58.669458 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:58.671919 containerd[1468]: time="2025-01-16T08:59:58.670165263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c4lh9,Uid:7552fa54-b39b-428e-9b31-66fd48108761,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:58.683948 containerd[1468]: time="2025-01-16T08:59:58.683905527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695856fb7d-5l4ph,Uid:5717cdda-4a10-4088-8b75-4fff7e8b3b8d,Namespace:calico-system,Attempt:0,}" Jan 16 08:59:58.783110 systemd[1]: Created slice kubepods-besteffort-pod77f8f711_c082_45da_b5d0_0016bf4eeb11.slice - libcontainer container kubepods-besteffort-pod77f8f711_c082_45da_b5d0_0016bf4eeb11.slice. 
Jan 16 08:59:58.791296 containerd[1468]: time="2025-01-16T08:59:58.790929593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsdq,Uid:77f8f711-c082-45da-b5d0-0016bf4eeb11,Namespace:calico-system,Attempt:0,}" Jan 16 08:59:58.938206 kubelet[2539]: E0116 08:59:58.937627 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:58.939580 containerd[1468]: time="2025-01-16T08:59:58.939445795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 16 08:59:59.012599 containerd[1468]: time="2025-01-16T08:59:59.012383325Z" level=error msg="Failed to destroy network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.014099 containerd[1468]: time="2025-01-16T08:59:59.013740956Z" level=error msg="encountered an error cleaning up failed sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.014099 containerd[1468]: time="2025-01-16T08:59:59.013833232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsdq,Uid:77f8f711-c082-45da-b5d0-0016bf4eeb11,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.022421 containerd[1468]: time="2025-01-16T08:59:59.013743591Z" level=error msg="Failed to destroy network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.023151 containerd[1468]: time="2025-01-16T08:59:59.022908240Z" level=error msg="encountered an error cleaning up failed sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.023151 containerd[1468]: time="2025-01-16T08:59:59.022971398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gz6pg,Uid:d521e8c4-e6b6-49d5-b863-b778812328d0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.023277 kubelet[2539]: E0116 08:59:59.023251 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.023333 kubelet[2539]: E0116 08:59:59.023325 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gz6pg" Jan 16 08:59:59.023366 kubelet[2539]: E0116 08:59:59.023349 2539 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gz6pg" Jan 16 08:59:59.023469 kubelet[2539]: E0116 08:59:59.023454 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.023533 kubelet[2539]: E0116 08:59:59.023505 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9xsdq" Jan 16 08:59:59.023533 kubelet[2539]: E0116 08:59:59.023532 2539 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9xsdq" Jan 16 08:59:59.023595 kubelet[2539]: E0116 08:59:59.023580 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9xsdq_calico-system(77f8f711-c082-45da-b5d0-0016bf4eeb11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9xsdq_calico-system(77f8f711-c082-45da-b5d0-0016bf4eeb11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9xsdq" podUID="77f8f711-c082-45da-b5d0-0016bf4eeb11" Jan 16 08:59:59.024471 kubelet[2539]: E0116 
08:59:59.024440 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gz6pg_kube-system(d521e8c4-e6b6-49d5-b863-b778812328d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gz6pg_kube-system(d521e8c4-e6b6-49d5-b863-b778812328d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gz6pg" podUID="d521e8c4-e6b6-49d5-b863-b778812328d0" Jan 16 08:59:59.028564 containerd[1468]: time="2025-01-16T08:59:59.028522807Z" level=error msg="Failed to destroy network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.029212 containerd[1468]: time="2025-01-16T08:59:59.028976261Z" level=error msg="encountered an error cleaning up failed sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.029212 containerd[1468]: time="2025-01-16T08:59:59.029030896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695856fb7d-5l4ph,Uid:5717cdda-4a10-4088-8b75-4fff7e8b3b8d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.029371 kubelet[2539]: E0116 08:59:59.029256 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.029371 kubelet[2539]: E0116 08:59:59.029312 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695856fb7d-5l4ph" Jan 16 08:59:59.029371 kubelet[2539]: E0116 08:59:59.029333 2539 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-695856fb7d-5l4ph" Jan 16 08:59:59.029686 kubelet[2539]: E0116 08:59:59.029618 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-695856fb7d-5l4ph_calico-system(5717cdda-4a10-4088-8b75-4fff7e8b3b8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-695856fb7d-5l4ph_calico-system(5717cdda-4a10-4088-8b75-4fff7e8b3b8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695856fb7d-5l4ph" podUID="5717cdda-4a10-4088-8b75-4fff7e8b3b8d" Jan 16 08:59:59.032719 containerd[1468]: time="2025-01-16T08:59:59.032664469Z" level=error msg="Failed to destroy network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.033106 containerd[1468]: time="2025-01-16T08:59:59.033070235Z" level=error msg="encountered an error cleaning up failed sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.033165 containerd[1468]: time="2025-01-16T08:59:59.033143095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c4lh9,Uid:7552fa54-b39b-428e-9b31-66fd48108761,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.033872 kubelet[2539]: E0116 08:59:59.033853 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.034988 kubelet[2539]: E0116 08:59:59.034146 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-c4lh9" Jan 16 08:59:59.034988 kubelet[2539]: E0116 08:59:59.034177 2539 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-c4lh9" Jan 16 08:59:59.034988 kubelet[2539]: E0116 08:59:59.034226 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-c4lh9_kube-system(7552fa54-b39b-428e-9b31-66fd48108761)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-c4lh9_kube-system(7552fa54-b39b-428e-9b31-66fd48108761)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-c4lh9" podUID="7552fa54-b39b-428e-9b31-66fd48108761" Jan 16 08:59:59.555403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122-shm.mount: Deactivated successfully. Jan 16 08:59:59.559068 containerd[1468]: time="2025-01-16T08:59:59.558657001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5948865d94-pzkn6,Uid:dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a,Namespace:calico-apiserver,Attempt:0,}" Jan 16 08:59:59.589223 containerd[1468]: time="2025-01-16T08:59:59.589174275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5948865d94-2bhf5,Uid:7ec8f642-82a0-4595-a31f-bbaab8ff9d73,Namespace:calico-apiserver,Attempt:0,}" Jan 16 08:59:59.671491 containerd[1468]: time="2025-01-16T08:59:59.671320975Z" level=error msg="Failed to destroy network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.672442 containerd[1468]: time="2025-01-16T08:59:59.671903843Z" level=error msg="encountered an error cleaning up failed sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.674528 containerd[1468]: time="2025-01-16T08:59:59.674479823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5948865d94-pzkn6,Uid:dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.675862 kubelet[2539]: E0116 08:59:59.674998 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.675862 
kubelet[2539]: E0116 08:59:59.675076 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5948865d94-pzkn6" Jan 16 08:59:59.675862 kubelet[2539]: E0116 08:59:59.675103 2539 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5948865d94-pzkn6" Jan 16 08:59:59.675082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259-shm.mount: Deactivated successfully. Jan 16 08:59:59.676195 kubelet[2539]: E0116 08:59:59.675180 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5948865d94-pzkn6_calico-apiserver(dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5948865d94-pzkn6_calico-apiserver(dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5948865d94-pzkn6" podUID="dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a" Jan 16 08:59:59.703653 containerd[1468]: time="2025-01-16T08:59:59.703545697Z" level=error msg="Failed to destroy network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.704192 containerd[1468]: time="2025-01-16T08:59:59.704132680Z" level=error msg="encountered an error cleaning up failed sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.704281 containerd[1468]: time="2025-01-16T08:59:59.704253742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5948865d94-2bhf5,Uid:7ec8f642-82a0-4595-a31f-bbaab8ff9d73,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.704626 kubelet[2539]: E0116 08:59:59.704602 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 08:59:59.705179 kubelet[2539]: E0116 08:59:59.704779 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5948865d94-2bhf5" Jan 16 08:59:59.705179 kubelet[2539]: E0116 08:59:59.704818 2539 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5948865d94-2bhf5" Jan 16 08:59:59.705484 kubelet[2539]: E0116 08:59:59.705417 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5948865d94-2bhf5_calico-apiserver(7ec8f642-82a0-4595-a31f-bbaab8ff9d73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5948865d94-2bhf5_calico-apiserver(7ec8f642-82a0-4595-a31f-bbaab8ff9d73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5948865d94-2bhf5" podUID="7ec8f642-82a0-4595-a31f-bbaab8ff9d73" Jan 16 08:59:59.941251 kubelet[2539]: I0116 08:59:59.941176 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 08:59:59.947428 kubelet[2539]: I0116 08:59:59.944636 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 08:59:59.947914 containerd[1468]: time="2025-01-16T08:59:59.947866038Z" level=info msg="StopPodSandbox for \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\"" Jan 16 08:59:59.953916 containerd[1468]: time="2025-01-16T08:59:59.953523818Z" level=info msg="StopPodSandbox for \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\"" Jan 16 08:59:59.959694 containerd[1468]: time="2025-01-16T08:59:59.959642139Z" level=info msg="Ensure that sandbox 5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b in task-service has been cleanup successfully" Jan 16 08:59:59.960283 containerd[1468]: time="2025-01-16T08:59:59.959666222Z" level=info msg="Ensure that sandbox 602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2 in task-service has been cleanup successfully" Jan 16 08:59:59.964737 kubelet[2539]: I0116 08:59:59.964694 2539 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 08:59:59.967929 containerd[1468]: time="2025-01-16T08:59:59.967872809Z" level=info msg="StopPodSandbox for \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\"" Jan 16 08:59:59.968804 containerd[1468]: time="2025-01-16T08:59:59.968484233Z" level=info msg="Ensure that sandbox 75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122 in task-service has been cleanup successfully" Jan 16 08:59:59.971191 kubelet[2539]: I0116 08:59:59.971160 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 08:59:59.975013 containerd[1468]: time="2025-01-16T08:59:59.973872383Z" level=info msg="StopPodSandbox for \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\"" Jan 16 08:59:59.975890 kubelet[2539]: I0116 08:59:59.975861 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 08:59:59.977781 containerd[1468]: time="2025-01-16T08:59:59.977321460Z" level=info msg="Ensure that sandbox 1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259 in task-service has been cleanup successfully" Jan 16 08:59:59.979574 containerd[1468]: time="2025-01-16T08:59:59.977534733Z" level=info msg="StopPodSandbox for \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\"" Jan 16 08:59:59.979574 containerd[1468]: time="2025-01-16T08:59:59.978151048Z" level=info msg="Ensure that sandbox ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78 in task-service has been cleanup successfully" Jan 16 08:59:59.990769 kubelet[2539]: I0116 08:59:59.990732 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 08:59:59.996910 containerd[1468]: time="2025-01-16T08:59:59.996859139Z" level=info msg="StopPodSandbox for \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\"" Jan 16 08:59:59.998110 containerd[1468]: time="2025-01-16T08:59:59.998061056Z" level=info msg="Ensure that sandbox 2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f in task-service has been cleanup successfully" Jan 16 09:00:00.173671 containerd[1468]: time="2025-01-16T09:00:00.173505004Z" level=error msg="StopPodSandbox for \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\" failed" error="failed to destroy network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:00:00.174021 kubelet[2539]: E0116 09:00:00.173951 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:00:00.174156 kubelet[2539]: E0116 09:00:00.174069 2539 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2"} Jan 16 09:00:00.174156 kubelet[2539]: E0116 09:00:00.174126 2539 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5717cdda-4a10-4088-8b75-4fff7e8b3b8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:00:00.174302 kubelet[2539]: E0116 09:00:00.174173 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5717cdda-4a10-4088-8b75-4fff7e8b3b8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695856fb7d-5l4ph" podUID="5717cdda-4a10-4088-8b75-4fff7e8b3b8d" Jan 16 09:00:00.176452 containerd[1468]: time="2025-01-16T09:00:00.174764717Z" level=error msg="StopPodSandbox for \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\" failed" error="failed to destroy network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:00:00.176681 kubelet[2539]: E0116 09:00:00.175166 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:00.176681 kubelet[2539]: E0116 09:00:00.175240 2539 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b"} Jan 16 09:00:00.176681 kubelet[2539]: E0116 09:00:00.175299 2539 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7552fa54-b39b-428e-9b31-66fd48108761\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:00:00.176681 kubelet[2539]: E0116 09:00:00.175361 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7552fa54-b39b-428e-9b31-66fd48108761\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-c4lh9" podUID="7552fa54-b39b-428e-9b31-66fd48108761" Jan 16 09:00:00.177807 containerd[1468]: time="2025-01-16T09:00:00.177549218Z" level=error msg="StopPodSandbox for \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\" failed" error="failed to destroy network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:00:00.178210 kubelet[2539]: E0116 09:00:00.177860 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:00:00.178210 kubelet[2539]: E0116 09:00:00.177918 2539 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122"} Jan 16 09:00:00.178210 kubelet[2539]: E0116 09:00:00.177969 2539 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d521e8c4-e6b6-49d5-b863-b778812328d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:00:00.178210 kubelet[2539]: E0116 09:00:00.178026 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d521e8c4-e6b6-49d5-b863-b778812328d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gz6pg" podUID="d521e8c4-e6b6-49d5-b863-b778812328d0" Jan 16 09:00:00.183678 containerd[1468]: time="2025-01-16T09:00:00.183590631Z" level=error msg="StopPodSandbox for \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\" failed" error="failed to destroy network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:00:00.184218 kubelet[2539]: E0116 09:00:00.184183 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:00.184873 kubelet[2539]: E0116 09:00:00.184242 2539 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259"} Jan 16 09:00:00.184873 kubelet[2539]: E0116 09:00:00.184297 2539 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:00:00.184873 kubelet[2539]: E0116 09:00:00.184352 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5948865d94-pzkn6" podUID="dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a" Jan 16 09:00:00.199554 containerd[1468]: time="2025-01-16T09:00:00.199213045Z" level=error msg="StopPodSandbox for \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\" failed" error="failed to destroy network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:00:00.200266 kubelet[2539]: E0116 09:00:00.200200 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:00.200703 kubelet[2539]: E0116 09:00:00.200281 2539 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78"} Jan 16 09:00:00.200703 kubelet[2539]: E0116 09:00:00.200358 2539 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77f8f711-c082-45da-b5d0-0016bf4eeb11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:00:00.200703 kubelet[2539]: E0116 09:00:00.200432 2539 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"77f8f711-c082-45da-b5d0-0016bf4eeb11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9xsdq" podUID="77f8f711-c082-45da-b5d0-0016bf4eeb11" Jan 16 09:00:00.202695 containerd[1468]: time="2025-01-16T09:00:00.202621698Z" level=error msg="StopPodSandbox for \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\" failed" error="failed to destroy network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:00:00.203744 kubelet[2539]: E0116 09:00:00.203549 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:00.203993 kubelet[2539]: E0116 09:00:00.203777 2539 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f"} Jan 16 09:00:00.203993 kubelet[2539]: E0116 09:00:00.203830 2539 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ec8f642-82a0-4595-a31f-bbaab8ff9d73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:00:00.203993 kubelet[2539]: E0116 09:00:00.203924 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ec8f642-82a0-4595-a31f-bbaab8ff9d73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5948865d94-2bhf5" podUID="7ec8f642-82a0-4595-a31f-bbaab8ff9d73" Jan 16 09:00:00.559373 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f-shm.mount: Deactivated successfully. Jan 16 09:00:10.386554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343594223.mount: Deactivated successfully. Jan 16 09:00:10.512116 systemd[1]: Started sshd@7-147.182.202.230:22-2.57.122.190:47790.service - OpenSSH per-connection server daemon (2.57.122.190:47790). 
Jan 16 09:00:10.543667 containerd[1468]: time="2025-01-16T09:00:10.542329691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 16 09:00:10.547835 containerd[1468]: time="2025-01-16T09:00:10.547375618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:10.558800 containerd[1468]: time="2025-01-16T09:00:10.558727929Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:10.574003 containerd[1468]: time="2025-01-16T09:00:10.573919584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:10.580049 containerd[1468]: time="2025-01-16T09:00:10.578554431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 11.637503708s" Jan 16 09:00:10.580049 containerd[1468]: time="2025-01-16T09:00:10.578633141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 16 09:00:10.648840 systemd[1]: Started sshd@8-147.182.202.230:22-88.214.25.64:28960.service - OpenSSH per-connection server daemon (88.214.25.64:28960). Jan 16 09:00:10.683481 sshd[3579]: banner exchange: Connection from 88.214.25.64 port 28960: invalid format Jan 16 09:00:10.683678 systemd[1]: sshd@8-147.182.202.230:22-88.214.25.64:28960.service: Deactivated successfully. 
Jan 16 09:00:10.697611 containerd[1468]: time="2025-01-16T09:00:10.697519771Z" level=info msg="CreateContainer within sandbox \"b8be6905f38e243832529465ebda2b2d3596544c8d2f87aef722dd2ebf3013f6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 09:00:10.745854 containerd[1468]: time="2025-01-16T09:00:10.745745224Z" level=info msg="CreateContainer within sandbox \"b8be6905f38e243832529465ebda2b2d3596544c8d2f87aef722dd2ebf3013f6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"63bfdee3c5c4a5cfb61371fab44503799018641fe60b0714ca1a21d6997be5a0\"" Jan 16 09:00:10.754829 containerd[1468]: time="2025-01-16T09:00:10.754742421Z" level=info msg="StartContainer for \"63bfdee3c5c4a5cfb61371fab44503799018641fe60b0714ca1a21d6997be5a0\"" Jan 16 09:00:10.771545 containerd[1468]: time="2025-01-16T09:00:10.771486554Z" level=info msg="StopPodSandbox for \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\"" Jan 16 09:00:10.789717 containerd[1468]: time="2025-01-16T09:00:10.789027934Z" level=info msg="StopPodSandbox for \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\"" Jan 16 09:00:10.917615 containerd[1468]: time="2025-01-16T09:00:10.917435460Z" level=error msg="StopPodSandbox for \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\" failed" error="failed to destroy network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:00:10.919817 kubelet[2539]: E0116 09:00:10.918663 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:00:10.919817 kubelet[2539]: E0116 09:00:10.918735 2539 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122"} Jan 16 09:00:10.919817 kubelet[2539]: E0116 09:00:10.918794 2539 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d521e8c4-e6b6-49d5-b863-b778812328d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:00:10.919817 kubelet[2539]: E0116 09:00:10.918848 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d521e8c4-e6b6-49d5-b863-b778812328d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-gz6pg" podUID="d521e8c4-e6b6-49d5-b863-b778812328d0" Jan 16 09:00:10.922524 containerd[1468]: time="2025-01-16T09:00:10.922319807Z" level=error msg="StopPodSandbox for \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\" failed" error="failed to destroy network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 09:00:10.922853 kubelet[2539]: E0116 09:00:10.922702 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:00:10.922853 kubelet[2539]: E0116 09:00:10.922762 2539 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2"} Jan 16 09:00:10.922853 kubelet[2539]: E0116 09:00:10.922831 2539 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5717cdda-4a10-4088-8b75-4fff7e8b3b8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 16 09:00:10.923041 kubelet[2539]: E0116 09:00:10.922881 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5717cdda-4a10-4088-8b75-4fff7e8b3b8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695856fb7d-5l4ph" podUID="5717cdda-4a10-4088-8b75-4fff7e8b3b8d" Jan 16 09:00:11.030881 systemd[1]: Started sshd@9-147.182.202.230:22-88.214.25.64:31596.service - OpenSSH per-connection server daemon (88.214.25.64:31596). Jan 16 09:00:11.108948 sshd[3627]: banner exchange: Connection from 88.214.25.64 port 31596: invalid format Jan 16 09:00:11.121023 systemd[1]: sshd@9-147.182.202.230:22-88.214.25.64:31596.service: Deactivated successfully. Jan 16 09:00:11.147796 systemd[1]: Started cri-containerd-63bfdee3c5c4a5cfb61371fab44503799018641fe60b0714ca1a21d6997be5a0.scope - libcontainer container 63bfdee3c5c4a5cfb61371fab44503799018641fe60b0714ca1a21d6997be5a0. Jan 16 09:00:11.236970 containerd[1468]: time="2025-01-16T09:00:11.235929594Z" level=info msg="StartContainer for \"63bfdee3c5c4a5cfb61371fab44503799018641fe60b0714ca1a21d6997be5a0\" returns successfully" Jan 16 09:00:11.348623 sshd[3574]: Invalid user test_user from 2.57.122.190 port 47790 Jan 16 09:00:11.416780 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 16 09:00:11.423105 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 16 09:00:11.522260 sshd[3574]: Connection closed by invalid user test_user 2.57.122.190 port 47790 [preauth] Jan 16 09:00:11.524714 systemd[1]: sshd@7-147.182.202.230:22-2.57.122.190:47790.service: Deactivated successfully. Jan 16 09:00:11.664751 systemd[1]: Started sshd@10-147.182.202.230:22-88.214.25.64:34542.service - OpenSSH per-connection server daemon (88.214.25.64:34542). Jan 16 09:00:11.695944 sshd[3678]: banner exchange: Connection from 88.214.25.64 port 34542: invalid format Jan 16 09:00:11.698302 systemd[1]: sshd@10-147.182.202.230:22-88.214.25.64:34542.service: Deactivated successfully. Jan 16 09:00:12.111593 kubelet[2539]: E0116 09:00:12.111507 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:13.133495 kubelet[2539]: I0116 09:00:13.131891 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:00:13.135521 kubelet[2539]: E0116 09:00:13.135365 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:13.717451 kernel: bpftool[3814]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 16 09:00:13.771945 containerd[1468]: time="2025-01-16T09:00:13.770875945Z" level=info msg="StopPodSandbox for \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\"" Jan 16 09:00:13.771945 containerd[1468]: time="2025-01-16T09:00:13.771542518Z" level=info msg="StopPodSandbox for \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\"" Jan 16 09:00:13.978417 kubelet[2539]: I0116 09:00:13.977743 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-266jz" podStartSLOduration=4.899061097 podStartE2EDuration="25.953973495s" podCreationTimestamp="2025-01-16 08:59:48 +0000 UTC" firstStartedPulling="2025-01-16 08:59:49.524843765 +0000 UTC m=+28.906915322" lastFinishedPulling="2025-01-16 09:00:10.57975615 +0000 UTC m=+49.961827720" observedRunningTime="2025-01-16 09:00:12.138333106 +0000 UTC m=+51.520404688" watchObservedRunningTime="2025-01-16 09:00:13.953973495 +0000 UTC m=+53.336045073" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:13.966 [INFO][3837] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:13.966 [INFO][3837] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" iface="eth0" netns="/var/run/netns/cni-3d91bd19-c648-33a9-b48b-5713a82be494" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:13.967 [INFO][3837] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" iface="eth0" netns="/var/run/netns/cni-3d91bd19-c648-33a9-b48b-5713a82be494" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:13.967 [INFO][3837] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" iface="eth0" netns="/var/run/netns/cni-3d91bd19-c648-33a9-b48b-5713a82be494" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:13.968 [INFO][3837] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:13.968 [INFO][3837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:14.199 [INFO][3853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" HandleID="k8s-pod-network.ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:14.201 [INFO][3853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:14.202 [INFO][3853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:14.222 [WARNING][3853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" HandleID="k8s-pod-network.ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:14.222 [INFO][3853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" HandleID="k8s-pod-network.ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:14.226 [INFO][3853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:14.237765 containerd[1468]: 2025-01-16 09:00:14.230 [INFO][3837] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:14.241085 containerd[1468]: time="2025-01-16T09:00:14.238432210Z" level=info msg="TearDown network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\" successfully" Jan 16 09:00:14.241085 containerd[1468]: time="2025-01-16T09:00:14.240486852Z" level=info msg="StopPodSandbox for \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\" returns successfully" Jan 16 09:00:14.248034 systemd[1]: run-netns-cni\x2d3d91bd19\x2dc648\x2d33a9\x2db48b\x2d5713a82be494.mount: Deactivated successfully. 
Jan 16 09:00:14.249696 containerd[1468]: time="2025-01-16T09:00:14.248087330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsdq,Uid:77f8f711-c082-45da-b5d0-0016bf4eeb11,Namespace:calico-system,Attempt:1,}" Jan 16 09:00:14.302635 systemd-networkd[1373]: vxlan.calico: Link UP Jan 16 09:00:14.302649 systemd-networkd[1373]: vxlan.calico: Gained carrier Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:13.952 [INFO][3844] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:13.953 [INFO][3844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" iface="eth0" netns="/var/run/netns/cni-8c44e65c-49a2-eef5-585c-3ca67ec2a477" Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:13.953 [INFO][3844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" iface="eth0" netns="/var/run/netns/cni-8c44e65c-49a2-eef5-585c-3ca67ec2a477" Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:13.955 [INFO][3844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" iface="eth0" netns="/var/run/netns/cni-8c44e65c-49a2-eef5-585c-3ca67ec2a477" Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:13.955 [INFO][3844] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:13.955 [INFO][3844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:14.197 [INFO][3852] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" HandleID="k8s-pod-network.1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:14.201 [INFO][3852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:14.225 [INFO][3852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:14.251 [WARNING][3852] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" HandleID="k8s-pod-network.1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:14.251 [INFO][3852] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" HandleID="k8s-pod-network.1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:14.257 [INFO][3852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:00:14.386182 containerd[1468]: 2025-01-16 09:00:14.268 [INFO][3844] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:14.394701 containerd[1468]: time="2025-01-16T09:00:14.393199080Z" level=info msg="TearDown network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\" successfully" Jan 16 09:00:14.398822 systemd[1]: run-netns-cni\x2d8c44e65c\x2d49a2\x2deef5\x2d585c\x2d3ca67ec2a477.mount: Deactivated successfully. Jan 16 09:00:14.403699 containerd[1468]: time="2025-01-16T09:00:14.403615232Z" level=info msg="StopPodSandbox for \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\" returns successfully" Jan 16 09:00:14.406321 containerd[1468]: time="2025-01-16T09:00:14.406158351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5948865d94-pzkn6,Uid:dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a,Namespace:calico-apiserver,Attempt:1,}" Jan 16 09:00:14.758189 systemd-networkd[1373]: cali73e433c7831: Link UP Jan 16 09:00:14.762759 systemd-networkd[1373]: cali73e433c7831: Gained carrier Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.563 [INFO][3913] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0 calico-apiserver-5948865d94- calico-apiserver dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a 783 0 2025-01-16 08:59:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5948865d94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-9-2d52908736 calico-apiserver-5948865d94-pzkn6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali73e433c7831 [] []}} ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-pzkn6" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.564 [INFO][3913] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-pzkn6" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.647 [INFO][3929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" HandleID="k8s-pod-network.5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.668 [INFO][3929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" HandleID="k8s-pod-network.5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bd740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-9-2d52908736", 
"pod":"calico-apiserver-5948865d94-pzkn6", "timestamp":"2025-01-16 09:00:14.646984946 +0000 UTC"}, Hostname:"ci-4081.3.0-9-2d52908736", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.668 [INFO][3929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.668 [INFO][3929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.669 [INFO][3929] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-9-2d52908736' Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.673 [INFO][3929] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.689 [INFO][3929] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.704 [INFO][3929] ipam/ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.708 [INFO][3929] ipam/ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.713 [INFO][3929] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.713 [INFO][3929] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.717 [INFO][3929] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4 Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.725 [INFO][3929] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.736 [INFO][3929] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.16.65/26] block=192.168.16.64/26 handle="k8s-pod-network.5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.736 [INFO][3929] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.65/26] handle="k8s-pod-network.5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.736 [INFO][3929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:00:14.807291 containerd[1468]: 2025-01-16 09:00:14.736 [INFO][3929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.65/26] IPv6=[] ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" HandleID="k8s-pod-network.5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.809176 containerd[1468]: 2025-01-16 09:00:14.742 [INFO][3913] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-pzkn6" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0", GenerateName:"calico-apiserver-5948865d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5948865d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"", Pod:"calico-apiserver-5948865d94-pzkn6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73e433c7831", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:14.809176 containerd[1468]: 2025-01-16 09:00:14.742 [INFO][3913] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.16.65/32] ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-pzkn6" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.809176 containerd[1468]: 2025-01-16 09:00:14.742 [INFO][3913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73e433c7831 ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-pzkn6" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.809176 containerd[1468]: 2025-01-16 09:00:14.764 [INFO][3913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-pzkn6" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.809176 containerd[1468]: 2025-01-16 09:00:14.769 [INFO][3913] cni-plugin/k8s.go 414: Added Mac,
interface name, and active container ID to endpoint ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-pzkn6" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0", GenerateName:"calico-apiserver-5948865d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5948865d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4", Pod:"calico-apiserver-5948865d94-pzkn6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73e433c7831", MAC:"1a:4a:11:47:2f:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:14.809176 containerd[1468]: 2025-01-16 09:00:14.793 [INFO][3913] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-pzkn6" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:14.850073 systemd-networkd[1373]: calif2c74ca7053: Link UP Jan 16 09:00:14.851965 systemd-networkd[1373]: calif2c74ca7053: Gained carrier Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.553 [INFO][3888] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0 csi-node-driver- calico-system 77f8f711-c082-45da-b5d0-0016bf4eeb11 782 0 2025-01-16 08:59:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-9-2d52908736 csi-node-driver-9xsdq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif2c74ca7053 [] []}} ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Namespace="calico-system" Pod="csi-node-driver-9xsdq" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.553 [INFO][3888] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s
ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Namespace="calico-system" Pod="csi-node-driver-9xsdq" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.657 [INFO][3925] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" HandleID="k8s-pod-network.7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.679 [INFO][3925] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" HandleID="k8s-pod-network.7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ff980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-9-2d52908736", "pod":"csi-node-driver-9xsdq", "timestamp":"2025-01-16 09:00:14.657322778 +0000 UTC"}, Hostname:"ci-4081.3.0-9-2d52908736", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.679 [INFO][3925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.737 [INFO][3925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.737 [INFO][3925] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-9-2d52908736' Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.743 [INFO][3925] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.759 [INFO][3925] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.770 [INFO][3925] ipam/ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.780 [INFO][3925] ipam/ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.787 [INFO][3925] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.788 [INFO][3925] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.797 [INFO][3925] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771 Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.817 [INFO][3925] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 
handle="k8s-pod-network.7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.837 [INFO][3925] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.16.66/26] block=192.168.16.64/26 handle="k8s-pod-network.7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.837 [INFO][3925] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.66/26] handle="k8s-pod-network.7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.837 [INFO][3925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:14.889366 containerd[1468]: 2025-01-16 09:00:14.837 [INFO][3925] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.66/26] IPv6=[] ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" HandleID="k8s-pod-network.7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.890758 containerd[1468]: 2025-01-16 09:00:14.842 [INFO][3888] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Namespace="calico-system" Pod="csi-node-driver-9xsdq" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77f8f711-c082-45da-b5d0-0016bf4eeb11", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"", Pod:"csi-node-driver-9xsdq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2c74ca7053", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:14.890758 containerd[1468]: 2025-01-16 09:00:14.843 [INFO][3888] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.16.66/32] ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Namespace="calico-system" Pod="csi-node-driver-9xsdq" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.890758 containerd[1468]: 2025-01-16 09:00:14.843 [INFO][3888] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
calif2c74ca7053 ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Namespace="calico-system" Pod="csi-node-driver-9xsdq" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.890758 containerd[1468]: 2025-01-16 09:00:14.853 [INFO][3888] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Namespace="calico-system" Pod="csi-node-driver-9xsdq" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.890758 containerd[1468]: 2025-01-16 09:00:14.853 [INFO][3888] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Namespace="calico-system" Pod="csi-node-driver-9xsdq" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77f8f711-c082-45da-b5d0-0016bf4eeb11", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771", Pod:"csi-node-driver-9xsdq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2c74ca7053", MAC:"3a:cf:c7:1e:55:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:14.890758 containerd[1468]: 2025-01-16 09:00:14.882 [INFO][3888] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771" Namespace="calico-system" Pod="csi-node-driver-9xsdq" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:14.953185 containerd[1468]: time="2025-01-16T09:00:14.952977086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:00:14.953185 containerd[1468]: time="2025-01-16T09:00:14.953058469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:00:14.953185 containerd[1468]: time="2025-01-16T09:00:14.953073376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:14.953638 containerd[1468]: time="2025-01-16T09:00:14.953376156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:14.966541 containerd[1468]: time="2025-01-16T09:00:14.962943571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:00:14.966541 containerd[1468]: time="2025-01-16T09:00:14.963007171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:00:14.966541 containerd[1468]: time="2025-01-16T09:00:14.963017662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:14.966541 containerd[1468]: time="2025-01-16T09:00:14.963124942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:15.009951 systemd[1]: Started cri-containerd-5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4.scope - libcontainer container 5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4. Jan 16 09:00:15.030367 systemd[1]: Started cri-containerd-7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771.scope - libcontainer container 7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771. Jan 16 09:00:15.037068 kubelet[2539]: I0116 09:00:15.037018 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 09:00:15.039744 kubelet[2539]: E0116 09:00:15.039691 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:15.156111 containerd[1468]: time="2025-01-16T09:00:15.156021404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xsdq,Uid:77f8f711-c082-45da-b5d0-0016bf4eeb11,Namespace:calico-system,Attempt:1,} returns sandbox id \"7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771\"" Jan 16 09:00:15.181188 containerd[1468]: time="2025-01-16T09:00:15.180504667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 16 09:00:15.224464 containerd[1468]: time="2025-01-16T09:00:15.224289252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5948865d94-pzkn6,Uid:dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4\"" Jan 16 09:00:15.346169 kubelet[2539]: E0116 09:00:15.345912 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:15.389892 systemd[1]: run-containerd-runc-k8s.io-63bfdee3c5c4a5cfb61371fab44503799018641fe60b0714ca1a21d6997be5a0-runc.IpAz5q.mount: Deactivated successfully. 
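The [3925] trace above walks Calico's block-affinity IPAM in order: take the host-wide lock, confirm this host's affinity for 192.168.16.64/26, load the block, and claim the next free address (here 192.168.16.66, after 192.168.16.65 went to the apiserver pod pzkn6). Below is a self-contained sketch of that scheme with the lock and the block store reduced to in-memory stand-ins; the handles are abbreviated to pod names for readability (the real handles embed the container ID), and the claim loop is illustrative, not Calico's actual ipam code.

package main

import (
	"fmt"
	"net"
	"sync"
)

// block is an in-memory stand-in for an affine IPAM block such as
// 192.168.16.64/26; claimed maps IPs to their allocation handles.
type block struct {
	mu      sync.Mutex // stands in for the "host-wide IPAM lock"
	cidr    *net.IPNet
	claimed map[string]string
}

// autoAssign claims the lowest free address in the block, mirroring the
// lock -> load block -> assign sequence in the trace above.
func (b *block) autoAssign(handle string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	ip := b.cidr.IP.Mask(b.cidr.Mask) // network address, e.g. .64
	for {
		ip = nextIP(ip)
		if !b.cidr.Contains(ip) {
			return nil, fmt.Errorf("block %s exhausted", b.cidr)
		}
		if _, used := b.claimed[ip.String()]; !used {
			b.claimed[ip.String()] = handle
			return ip, nil
		}
	}
}

// nextIP returns ip+1 without mutating its argument.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.16.64/26")
	b := &block{cidr: cidr, claimed: map[string]string{}}
	// The four pods on this node claim .65-.68 in order, as in the log.
	for _, pod := range []string{
		"calico-apiserver-5948865d94-pzkn6",
		"csi-node-driver-9xsdq",
		"coredns-76f75df574-c4lh9",
		"calico-apiserver-5948865d94-2bhf5",
	} {
		ip, err := b.autoAssign("k8s-pod-network." + pod)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s\n", pod, ip)
	}
}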
Jan 16 09:00:15.768620 containerd[1468]: time="2025-01-16T09:00:15.768553858Z" level=info msg="StopPodSandbox for \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\"" Jan 16 09:00:15.769763 containerd[1468]: time="2025-01-16T09:00:15.769157328Z" level=info msg="StopPodSandbox for \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\"" Jan 16 09:00:15.919261 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.859 [INFO][4150] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.859 [INFO][4150] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" iface="eth0" netns="/var/run/netns/cni-ffa928b0-3dab-ce9d-4751-562744e6bf1e" Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.861 [INFO][4150] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" iface="eth0" netns="/var/run/netns/cni-ffa928b0-3dab-ce9d-4751-562744e6bf1e" Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.862 [INFO][4150] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" iface="eth0" netns="/var/run/netns/cni-ffa928b0-3dab-ce9d-4751-562744e6bf1e" Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.862 [INFO][4150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.864 [INFO][4150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.907 [INFO][4162] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" HandleID="k8s-pod-network.5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.907 [INFO][4162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.907 [INFO][4162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.922 [WARNING][4162] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" HandleID="k8s-pod-network.5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.922 [INFO][4162] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" HandleID="k8s-pod-network.5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.926 [INFO][4162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:15.936331 containerd[1468]: 2025-01-16 09:00:15.931 [INFO][4150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:15.941504 containerd[1468]: time="2025-01-16T09:00:15.938274108Z" level=info msg="TearDown network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\" successfully" Jan 16 09:00:15.941504 containerd[1468]: time="2025-01-16T09:00:15.938441901Z" level=info msg="StopPodSandbox for \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\" returns successfully" Jan 16 09:00:15.944613 kubelet[2539]: E0116 09:00:15.943271 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:15.944110 systemd[1]: run-netns-cni\x2dffa928b0\x2d3dab\x2dce9d\x2d4751\x2d562744e6bf1e.mount: Deactivated successfully. Jan 16 09:00:15.946885 containerd[1468]: time="2025-01-16T09:00:15.946840781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c4lh9,Uid:7552fa54-b39b-428e-9b31-66fd48108761,Namespace:kube-system,Attempt:1,}" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.866 [INFO][4149] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.867 [INFO][4149] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" iface="eth0" netns="/var/run/netns/cni-6d37a850-bbfa-f64c-0fd3-91816401f826" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.868 [INFO][4149] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" iface="eth0" netns="/var/run/netns/cni-6d37a850-bbfa-f64c-0fd3-91816401f826" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.868 [INFO][4149] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" iface="eth0" netns="/var/run/netns/cni-6d37a850-bbfa-f64c-0fd3-91816401f826" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.869 [INFO][4149] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.869 [INFO][4149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.930 [INFO][4163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" HandleID="k8s-pod-network.2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.931 [INFO][4163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.932 [INFO][4163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.949 [WARNING][4163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" HandleID="k8s-pod-network.2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.949 [INFO][4163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" HandleID="k8s-pod-network.2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.954 [INFO][4163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:15.962615 containerd[1468]: 2025-01-16 09:00:15.957 [INFO][4149] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:15.963137 containerd[1468]: time="2025-01-16T09:00:15.962965044Z" level=info msg="TearDown network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\" successfully" Jan 16 09:00:15.963137 containerd[1468]: time="2025-01-16T09:00:15.963014219Z" level=info msg="StopPodSandbox for \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\" returns successfully" Jan 16 09:00:15.966105 containerd[1468]: time="2025-01-16T09:00:15.965722813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5948865d94-2bhf5,Uid:7ec8f642-82a0-4595-a31f-bbaab8ff9d73,Namespace:calico-apiserver,Attempt:1,}" Jan 16 09:00:16.198487 systemd-networkd[1373]: cali8413f25ea1e: Link UP Jan 16 09:00:16.202263 systemd-networkd[1373]: cali8413f25ea1e: Gained carrier Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.036 [INFO][4175] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0 coredns-76f75df574- kube-system 7552fa54-b39b-428e-9b31-66fd48108761 801 0 2025-01-16 08:59:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-9-2d52908736 coredns-76f75df574-c4lh9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8413f25ea1e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Namespace="kube-system" Pod="coredns-76f75df574-c4lh9" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.037 [INFO][4175] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Namespace="kube-system" Pod="coredns-76f75df574-c4lh9" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.107 [INFO][4198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" HandleID="k8s-pod-network.f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.124 [INFO][4198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" HandleID="k8s-pod-network.f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003059c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-9-2d52908736", "pod":"coredns-76f75df574-c4lh9", "timestamp":"2025-01-16 09:00:16.107001774 +0000 UTC"}, Hostname:"ci-4081.3.0-9-2d52908736", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.124 [INFO][4198] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.124 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.124 [INFO][4198] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-9-2d52908736' Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.131 [INFO][4198] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.145 [INFO][4198] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.155 [INFO][4198] ipam/ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.161 [INFO][4198] ipam/ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.167 [INFO][4198] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.167 [INFO][4198] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.170 [INFO][4198] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32 Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.179 [INFO][4198] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.190 [INFO][4198] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.16.67/26] block=192.168.16.64/26 handle="k8s-pod-network.f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.190 [INFO][4198] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.67/26] handle="k8s-pod-network.f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.190 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
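The teardown traces above ([4162] and [4163]) release each stale sandbox's address twice over: first by handleID, then by workload ID, and both paths tolerate "Asked to release address but it doesn't exist. Ignoring", which keeps a repeated CNI DEL harmless. A minimal sketch of that idempotent release order, with the IPAM datastore reduced to a map and all names illustrative:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("address doesn't exist")

// store is a toy stand-in for the IPAM datastore: handle -> allocated IPs.
type store map[string][]string

func (s store) releaseByHandle(handle string) ([]string, error) {
	ips, ok := s[handle]
	if !ok {
		return nil, errNotFound
	}
	delete(s, handle)
	return ips, nil
}

// release mirrors the logged order: try the handleID, fall back to the
// workload ID, and treat a missing allocation as already-released so a
// second CNI DEL for the same sandbox is a no-op.
func release(s store, handleID, workloadID string) error {
	for _, h := range []string{handleID, workloadID} {
		ips, err := s.releaseByHandle(h)
		if err == nil {
			fmt.Println("released", ips, "via", h)
			return nil
		}
		if !errors.Is(err, errNotFound) {
			return err
		}
		// WARNING: Asked to release address but it doesn't exist. Ignoring.
	}
	return nil
}

func main() {
	s := store{}
	// Both lookups miss, as in the 5d6bcf14... teardown: no error surfaces.
	if err := release(s, "k8s-pod-network.5d6bcf14...", "coredns-76f75df574-c4lh9"); err != nil {
		panic(err)
	}
	fmt.Println("teardown processing complete")
}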
Jan 16 09:00:16.229318 containerd[1468]: 2025-01-16 09:00:16.190 [INFO][4198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.67/26] IPv6=[] ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" HandleID="k8s-pod-network.f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:16.231573 containerd[1468]: 2025-01-16 09:00:16.194 [INFO][4175] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Namespace="kube-system" Pod="coredns-76f75df574-c4lh9" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7552fa54-b39b-428e-9b31-66fd48108761", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"", Pod:"coredns-76f75df574-c4lh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8413f25ea1e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:16.231573 containerd[1468]: 2025-01-16 09:00:16.194 [INFO][4175] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.16.67/32] ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Namespace="kube-system" Pod="coredns-76f75df574-c4lh9" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:16.231573 containerd[1468]: 2025-01-16 09:00:16.194 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8413f25ea1e ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Namespace="kube-system" Pod="coredns-76f75df574-c4lh9" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:16.231573 containerd[1468]: 2025-01-16 09:00:16.197 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Namespace="kube-system" Pod="coredns-76f75df574-c4lh9" 
WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:16.231573 containerd[1468]: 2025-01-16 09:00:16.197 [INFO][4175] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Namespace="kube-system" Pod="coredns-76f75df574-c4lh9" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7552fa54-b39b-428e-9b31-66fd48108761", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32", Pod:"coredns-76f75df574-c4lh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8413f25ea1e", MAC:"a6:84:5f:ad:3d:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:16.231573 containerd[1468]: 2025-01-16 09:00:16.225 [INFO][4175] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32" Namespace="kube-system" Pod="coredns-76f75df574-c4lh9" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:16.249520 systemd[1]: run-netns-cni\x2d6d37a850\x2dbbfa\x2df64c\x2d0fd3\x2d91816401f826.mount: Deactivated successfully. Jan 16 09:00:16.279546 containerd[1468]: time="2025-01-16T09:00:16.278785588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:00:16.279546 containerd[1468]: time="2025-01-16T09:00:16.278895809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:00:16.279546 containerd[1468]: time="2025-01-16T09:00:16.278917100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:16.279546 containerd[1468]: time="2025-01-16T09:00:16.279037475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:16.285818 systemd-networkd[1373]: caliaf9b318003d: Link UP Jan 16 09:00:16.288017 systemd-networkd[1373]: caliaf9b318003d: Gained carrier Jan 16 09:00:16.332806 systemd[1]: Started cri-containerd-f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32.scope - libcontainer container f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32. Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.068 [INFO][4184] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0 calico-apiserver-5948865d94- calico-apiserver 7ec8f642-82a0-4595-a31f-bbaab8ff9d73 802 0 2025-01-16 08:59:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5948865d94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-9-2d52908736 calico-apiserver-5948865d94-2bhf5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaf9b318003d [] []}} ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-2bhf5" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.069 [INFO][4184] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-2bhf5" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.148 [INFO][4202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" HandleID="k8s-pod-network.6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.170 [INFO][4202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" HandleID="k8s-pod-network.6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-9-2d52908736", "pod":"calico-apiserver-5948865d94-2bhf5", "timestamp":"2025-01-16 09:00:16.148840808 +0000 UTC"}, Hostname:"ci-4081.3.0-9-2d52908736", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.171 [INFO][4202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
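Go's default struct printer dumps the coredns endpoint's ports in hex, which makes the [4175] endpoint dump above easy to misread; decoded, they are the standard kube-dns ports (53 for DNS over UDP and TCP, 9153 for metrics):

package main

import "fmt"

func main() {
	// Port values copied from the v3.WorkloadEndpointPort dump above.
	ports := []struct {
		name  string
		proto string
		port  uint16
	}{
		{"dns", "UDP", 0x35},       // 53
		{"dns-tcp", "TCP", 0x35},   // 53
		{"metrics", "TCP", 0x23c1}, // 9153
	}
	for _, p := range ports {
		fmt.Printf("%-8s %s %d\n", p.name, p.proto, p.port)
	}
}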
Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.190 [INFO][4202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.190 [INFO][4202] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-9-2d52908736' Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.204 [INFO][4202] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.212 [INFO][4202] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.232 [INFO][4202] ipam/ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.237 [INFO][4202] ipam/ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.242 [INFO][4202] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.242 [INFO][4202] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.252 [INFO][4202] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.264 [INFO][4202] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.275 [INFO][4202] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.16.68/26] block=192.168.16.64/26 handle="k8s-pod-network.6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.275 [INFO][4202] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.68/26] handle="k8s-pod-network.6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.275 [INFO][4202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 16 09:00:16.339542 containerd[1468]: 2025-01-16 09:00:16.275 [INFO][4202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.68/26] IPv6=[] ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" HandleID="k8s-pod-network.6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:16.340385 containerd[1468]: 2025-01-16 09:00:16.280 [INFO][4184] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-2bhf5" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0", GenerateName:"calico-apiserver-5948865d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ec8f642-82a0-4595-a31f-bbaab8ff9d73", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5948865d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"", Pod:"calico-apiserver-5948865d94-2bhf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf9b318003d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:16.340385 containerd[1468]: 2025-01-16 09:00:16.280 [INFO][4184] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.16.68/32] ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-2bhf5" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:16.340385 containerd[1468]: 2025-01-16 09:00:16.281 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf9b318003d ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-2bhf5" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:16.340385 containerd[1468]: 2025-01-16 09:00:16.287 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-2bhf5" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:16.340385 containerd[1468]: 2025-01-16 09:00:16.289 [INFO][4184] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-2bhf5" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0", GenerateName:"calico-apiserver-5948865d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ec8f642-82a0-4595-a31f-bbaab8ff9d73", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5948865d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b", Pod:"calico-apiserver-5948865d94-2bhf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf9b318003d", MAC:"1e:f2:25:8e:67:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:16.340385 containerd[1468]: 2025-01-16 09:00:16.335 [INFO][4184] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b" Namespace="calico-apiserver" Pod="calico-apiserver-5948865d94-2bhf5" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:16.406364 containerd[1468]: time="2025-01-16T09:00:16.406170804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:00:16.406364 containerd[1468]: time="2025-01-16T09:00:16.406244772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:00:16.406364 containerd[1468]: time="2025-01-16T09:00:16.406261280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:16.406983 containerd[1468]: time="2025-01-16T09:00:16.406357962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:16.465649 systemd[1]: Started cri-containerd-6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b.scope - libcontainer container 6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b. 
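Every WorkloadEndpoint name in these traces follows one convention: escape '-' as '--' inside the node and pod names, then join node, the literal "k8s", pod, and interface with single dashes, so the separators stay unambiguous. A short sketch reproducing the names seen above (the helper is illustrative, not Calico's exported API):

package main

import (
	"fmt"
	"strings"
)

// wepName rebuilds the endpoint naming convention visible in the logs:
// dashes inside each component are doubled so single dashes can serve
// as separators between node, "k8s", pod, and interface.
func wepName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return fmt.Sprintf("%s-k8s-%s-%s", esc(node), esc(pod), esc(iface))
}

func main() {
	node := "ci-4081.3.0-9-2d52908736"
	for _, pod := range []string{
		"calico-apiserver-5948865d94-2bhf5",
		"coredns-76f75df574-c4lh9",
		"csi-node-driver-9xsdq",
	} {
		// e.g. ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0
		fmt.Println(wepName(node, pod, "eth0"))
	}
}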
Jan 16 09:00:16.516428 containerd[1468]: time="2025-01-16T09:00:16.515319184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c4lh9,Uid:7552fa54-b39b-428e-9b31-66fd48108761,Namespace:kube-system,Attempt:1,} returns sandbox id \"f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32\"" Jan 16 09:00:16.518044 kubelet[2539]: E0116 09:00:16.517742 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:16.523855 containerd[1468]: time="2025-01-16T09:00:16.523623132Z" level=info msg="CreateContainer within sandbox \"f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 09:00:16.556697 containerd[1468]: time="2025-01-16T09:00:16.556618530Z" level=info msg="CreateContainer within sandbox \"f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c8af9dc4bc37dae710ef9446e487349cdd07b52867ba3e0ab9da2a700ac0c7f\"" Jan 16 09:00:16.558847 containerd[1468]: time="2025-01-16T09:00:16.557860377Z" level=info msg="StartContainer for \"4c8af9dc4bc37dae710ef9446e487349cdd07b52867ba3e0ab9da2a700ac0c7f\"" Jan 16 09:00:16.559137 systemd-networkd[1373]: cali73e433c7831: Gained IPv6LL Jan 16 09:00:16.602612 systemd[1]: Started cri-containerd-4c8af9dc4bc37dae710ef9446e487349cdd07b52867ba3e0ab9da2a700ac0c7f.scope - libcontainer container 4c8af9dc4bc37dae710ef9446e487349cdd07b52867ba3e0ab9da2a700ac0c7f. Jan 16 09:00:16.607755 containerd[1468]: time="2025-01-16T09:00:16.607673383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5948865d94-2bhf5,Uid:7ec8f642-82a0-4595-a31f-bbaab8ff9d73,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b\"" Jan 16 09:00:16.641978 containerd[1468]: time="2025-01-16T09:00:16.641936596Z" level=info msg="StartContainer for \"4c8af9dc4bc37dae710ef9446e487349cdd07b52867ba3e0ab9da2a700ac0c7f\" returns successfully" Jan 16 09:00:16.879599 systemd-networkd[1373]: calif2c74ca7053: Gained IPv6LL Jan 16 09:00:17.151472 kubelet[2539]: E0116 09:00:17.151262 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:17.194497 kubelet[2539]: I0116 09:00:17.194452 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-c4lh9" podStartSLOduration=41.194380852 podStartE2EDuration="41.194380852s" podCreationTimestamp="2025-01-16 08:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:00:17.170224455 +0000 UTC m=+56.552296030" watchObservedRunningTime="2025-01-16 09:00:17.194380852 +0000 UTC m=+56.576452432" Jan 16 09:00:18.031471 systemd-networkd[1373]: cali8413f25ea1e: Gained IPv6LL Jan 16 09:00:18.162452 kubelet[2539]: E0116 09:00:18.161736 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:18.286600 systemd-networkd[1373]: caliaf9b318003d: Gained IPv6LL Jan 16 09:00:19.165155 kubelet[2539]: E0116 09:00:19.165045 2539 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:20.804690 containerd[1468]: time="2025-01-16T09:00:20.804632626Z" level=info msg="StopPodSandbox for \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\"" Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.875 [WARNING][4386] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0", GenerateName:"calico-apiserver-5948865d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5948865d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4", Pod:"calico-apiserver-5948865d94-pzkn6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73e433c7831", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.875 [INFO][4386] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.875 [INFO][4386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" iface="eth0" netns="" Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.875 [INFO][4386] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.875 [INFO][4386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.923 [INFO][4392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" HandleID="k8s-pod-network.1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.923 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.923 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.932 [WARNING][4392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" HandleID="k8s-pod-network.1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.932 [INFO][4392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" HandleID="k8s-pod-network.1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.935 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:20.940324 containerd[1468]: 2025-01-16 09:00:20.938 [INFO][4386] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:20.941595 containerd[1468]: time="2025-01-16T09:00:20.940417030Z" level=info msg="TearDown network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\" successfully" Jan 16 09:00:20.941595 containerd[1468]: time="2025-01-16T09:00:20.940444787Z" level=info msg="StopPodSandbox for \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\" returns successfully" Jan 16 09:00:20.941595 containerd[1468]: time="2025-01-16T09:00:20.941074030Z" level=info msg="RemovePodSandbox for \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\"" Jan 16 09:00:20.944888 containerd[1468]: time="2025-01-16T09:00:20.944173950Z" level=info msg="Forcibly stopping sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\"" Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:20.996 [WARNING][4410] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0", GenerateName:"calico-apiserver-5948865d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbcd6f2e-da2f-4282-a7c3-3ba835a7bf1a", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5948865d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4", Pod:"calico-apiserver-5948865d94-pzkn6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73e433c7831", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:20.997 [INFO][4410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:20.997 [INFO][4410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" iface="eth0" netns="" Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:20.998 [INFO][4410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:20.998 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:21.046 [INFO][4416] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" HandleID="k8s-pod-network.1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:21.046 [INFO][4416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:21.046 [INFO][4416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:21.061 [WARNING][4416] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" HandleID="k8s-pod-network.1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:21.061 [INFO][4416] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" HandleID="k8s-pod-network.1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--pzkn6-eth0" Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:21.064 [INFO][4416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:21.069701 containerd[1468]: 2025-01-16 09:00:21.067 [INFO][4410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259" Jan 16 09:00:21.071147 containerd[1468]: time="2025-01-16T09:00:21.069677652Z" level=info msg="TearDown network for sandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\" successfully" Jan 16 09:00:21.077832 containerd[1468]: time="2025-01-16T09:00:21.077778865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:00:21.077998 containerd[1468]: time="2025-01-16T09:00:21.077874096Z" level=info msg="RemovePodSandbox \"1d57f446dcf88e5db1a82d7d79ce675a768711c9bc5ede1b2d5f3b6f1fc3c259\" returns successfully" Jan 16 09:00:21.079557 containerd[1468]: time="2025-01-16T09:00:21.079000402Z" level=info msg="StopPodSandbox for \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\"" Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.158 [WARNING][4434] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0", GenerateName:"calico-apiserver-5948865d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ec8f642-82a0-4595-a31f-bbaab8ff9d73", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5948865d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b", Pod:"calico-apiserver-5948865d94-2bhf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf9b318003d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.158 [INFO][4434] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.158 [INFO][4434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" iface="eth0" netns="" Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.158 [INFO][4434] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.158 [INFO][4434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.193 [INFO][4440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" HandleID="k8s-pod-network.2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.194 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.194 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.202 [WARNING][4440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" HandleID="k8s-pod-network.2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.203 [INFO][4440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" HandleID="k8s-pod-network.2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.205 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:21.209856 containerd[1468]: 2025-01-16 09:00:21.207 [INFO][4434] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:21.210852 containerd[1468]: time="2025-01-16T09:00:21.209894518Z" level=info msg="TearDown network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\" successfully" Jan 16 09:00:21.210852 containerd[1468]: time="2025-01-16T09:00:21.209919411Z" level=info msg="StopPodSandbox for \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\" returns successfully" Jan 16 09:00:21.210852 containerd[1468]: time="2025-01-16T09:00:21.210659715Z" level=info msg="RemovePodSandbox for \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\"" Jan 16 09:00:21.210852 containerd[1468]: time="2025-01-16T09:00:21.210701892Z" level=info msg="Forcibly stopping sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\"" Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.282 [WARNING][4459] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0", GenerateName:"calico-apiserver-5948865d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ec8f642-82a0-4595-a31f-bbaab8ff9d73", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5948865d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b", Pod:"calico-apiserver-5948865d94-2bhf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf9b318003d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.283 [INFO][4459] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.283 [INFO][4459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" iface="eth0" netns="" Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.283 [INFO][4459] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.283 [INFO][4459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.326 [INFO][4466] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" HandleID="k8s-pod-network.2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.327 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.327 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.341 [WARNING][4466] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" HandleID="k8s-pod-network.2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.341 [INFO][4466] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" HandleID="k8s-pod-network.2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--apiserver--5948865d94--2bhf5-eth0" Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.348 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:21.355373 containerd[1468]: 2025-01-16 09:00:21.352 [INFO][4459] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f" Jan 16 09:00:21.355373 containerd[1468]: time="2025-01-16T09:00:21.354677793Z" level=info msg="TearDown network for sandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\" successfully" Jan 16 09:00:21.359487 containerd[1468]: time="2025-01-16T09:00:21.359284565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:00:21.359487 containerd[1468]: time="2025-01-16T09:00:21.359350485Z" level=info msg="RemovePodSandbox \"2aab4f2c36c54e68541d122a437d2b36a6a5074ef0ffb322b175a1764759f97f\" returns successfully" Jan 16 09:00:21.360409 containerd[1468]: time="2025-01-16T09:00:21.360366832Z" level=info msg="StopPodSandbox for \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\"" Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.433 [WARNING][4485] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77f8f711-c082-45da-b5d0-0016bf4eeb11", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771", Pod:"csi-node-driver-9xsdq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2c74ca7053", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.433 [INFO][4485] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.433 [INFO][4485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" iface="eth0" netns="" Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.433 [INFO][4485] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.434 [INFO][4485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.471 [INFO][4491] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" HandleID="k8s-pod-network.ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.471 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.471 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.481 [WARNING][4491] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" HandleID="k8s-pod-network.ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.481 [INFO][4491] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" HandleID="k8s-pod-network.ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.484 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:21.490336 containerd[1468]: 2025-01-16 09:00:21.487 [INFO][4485] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:21.491033 containerd[1468]: time="2025-01-16T09:00:21.490385548Z" level=info msg="TearDown network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\" successfully" Jan 16 09:00:21.491033 containerd[1468]: time="2025-01-16T09:00:21.490455260Z" level=info msg="StopPodSandbox for \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\" returns successfully" Jan 16 09:00:21.492523 containerd[1468]: time="2025-01-16T09:00:21.491989058Z" level=info msg="RemovePodSandbox for \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\"" Jan 16 09:00:21.492523 containerd[1468]: time="2025-01-16T09:00:21.492253222Z" level=info msg="Forcibly stopping sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\"" Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.571 [WARNING][4509] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77f8f711-c082-45da-b5d0-0016bf4eeb11", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771", Pod:"csi-node-driver-9xsdq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2c74ca7053", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.572 [INFO][4509] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.573 [INFO][4509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" iface="eth0" netns="" Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.573 [INFO][4509] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.573 [INFO][4509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.607 [INFO][4516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" HandleID="k8s-pod-network.ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.608 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.608 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.617 [WARNING][4516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" HandleID="k8s-pod-network.ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.617 [INFO][4516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" HandleID="k8s-pod-network.ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Workload="ci--4081.3.0--9--2d52908736-k8s-csi--node--driver--9xsdq-eth0" Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.621 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:21.626653 containerd[1468]: 2025-01-16 09:00:21.623 [INFO][4509] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78" Jan 16 09:00:21.626653 containerd[1468]: time="2025-01-16T09:00:21.625622958Z" level=info msg="TearDown network for sandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\" successfully" Jan 16 09:00:21.629141 containerd[1468]: time="2025-01-16T09:00:21.629100196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:00:21.629324 containerd[1468]: time="2025-01-16T09:00:21.629306479Z" level=info msg="RemovePodSandbox \"ee27b0e45c688ffaaa8e99a2c7b0ab5ce4e16c4571e8546a128ce0e79df31b78\" returns successfully" Jan 16 09:00:21.630013 containerd[1468]: time="2025-01-16T09:00:21.629983383Z" level=info msg="StopPodSandbox for \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\"" Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.685 [WARNING][4535] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7552fa54-b39b-428e-9b31-66fd48108761", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32", Pod:"coredns-76f75df574-c4lh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8413f25ea1e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.685 [INFO][4535] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.685 [INFO][4535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" iface="eth0" netns="" Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.685 [INFO][4535] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.685 [INFO][4535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.725 [INFO][4542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" HandleID="k8s-pod-network.5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.725 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.725 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.757 [WARNING][4542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" HandleID="k8s-pod-network.5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.757 [INFO][4542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" HandleID="k8s-pod-network.5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.767 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:21.774987 containerd[1468]: 2025-01-16 09:00:21.770 [INFO][4535] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:21.774987 containerd[1468]: time="2025-01-16T09:00:21.774743051Z" level=info msg="TearDown network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\" successfully" Jan 16 09:00:21.774987 containerd[1468]: time="2025-01-16T09:00:21.774785918Z" level=info msg="StopPodSandbox for \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\" returns successfully" Jan 16 09:00:21.781429 containerd[1468]: time="2025-01-16T09:00:21.779643894Z" level=info msg="RemovePodSandbox for \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\"" Jan 16 09:00:21.781429 containerd[1468]: time="2025-01-16T09:00:21.779723270Z" level=info msg="Forcibly stopping sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\"" Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.879 [WARNING][4560] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7552fa54-b39b-428e-9b31-66fd48108761", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"f05a156eae561b9185cb1e6eec82674e94e25ab0bfebc2d19e01e4df3e3eae32", Pod:"coredns-76f75df574-c4lh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8413f25ea1e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.880 [INFO][4560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.880 [INFO][4560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" iface="eth0" netns="" Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.880 [INFO][4560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.880 [INFO][4560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.929 [INFO][4566] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" HandleID="k8s-pod-network.5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.929 [INFO][4566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.929 [INFO][4566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.940 [WARNING][4566] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" HandleID="k8s-pod-network.5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.940 [INFO][4566] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" HandleID="k8s-pod-network.5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--c4lh9-eth0" Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.943 [INFO][4566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:21.950244 containerd[1468]: 2025-01-16 09:00:21.947 [INFO][4560] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b" Jan 16 09:00:21.959881 containerd[1468]: time="2025-01-16T09:00:21.958560642Z" level=info msg="TearDown network for sandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\" successfully" Jan 16 09:00:21.961988 containerd[1468]: time="2025-01-16T09:00:21.961936841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:00:21.962220 containerd[1468]: time="2025-01-16T09:00:21.962198653Z" level=info msg="RemovePodSandbox \"5d6bcf14d819a8a18ddc290d96bfa34e336349f8d1031d8f076cdaf7ff53345b\" returns successfully" Jan 16 09:00:22.769699 containerd[1468]: time="2025-01-16T09:00:22.769261277Z" level=info msg="StopPodSandbox for \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\"" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.845 [INFO][4586] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.845 [INFO][4586] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" iface="eth0" netns="/var/run/netns/cni-98d962ff-392d-3f2f-34e7-1890a9212847" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.846 [INFO][4586] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" iface="eth0" netns="/var/run/netns/cni-98d962ff-392d-3f2f-34e7-1890a9212847" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.847 [INFO][4586] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" iface="eth0" netns="/var/run/netns/cni-98d962ff-392d-3f2f-34e7-1890a9212847" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.847 [INFO][4586] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.847 [INFO][4586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.898 [INFO][4592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" HandleID="k8s-pod-network.602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.901 [INFO][4592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.901 [INFO][4592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.910 [WARNING][4592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" HandleID="k8s-pod-network.602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.910 [INFO][4592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" HandleID="k8s-pod-network.602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.912 [INFO][4592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:22.918754 containerd[1468]: 2025-01-16 09:00:22.915 [INFO][4586] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:00:22.924239 containerd[1468]: time="2025-01-16T09:00:22.921578800Z" level=info msg="TearDown network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\" successfully" Jan 16 09:00:22.924239 containerd[1468]: time="2025-01-16T09:00:22.921630567Z" level=info msg="StopPodSandbox for \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\" returns successfully" Jan 16 09:00:22.924239 containerd[1468]: time="2025-01-16T09:00:22.922504428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695856fb7d-5l4ph,Uid:5717cdda-4a10-4088-8b75-4fff7e8b3b8d,Namespace:calico-system,Attempt:1,}" Jan 16 09:00:22.933776 systemd[1]: run-netns-cni\x2d98d962ff\x2d392d\x2d3f2f\x2d34e7\x2d1890a9212847.mount: Deactivated successfully. 
Jan 16 09:00:23.117440 systemd-networkd[1373]: cali049fdbee065: Link UP Jan 16 09:00:23.119290 systemd-networkd[1373]: cali049fdbee065: Gained carrier Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:22.997 [INFO][4598] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0 calico-kube-controllers-695856fb7d- calico-system 5717cdda-4a10-4088-8b75-4fff7e8b3b8d 845 0 2025-01-16 08:59:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:695856fb7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-9-2d52908736 calico-kube-controllers-695856fb7d-5l4ph eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali049fdbee065 [] []}} ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Namespace="calico-system" Pod="calico-kube-controllers-695856fb7d-5l4ph" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:22.997 [INFO][4598] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Namespace="calico-system" Pod="calico-kube-controllers-695856fb7d-5l4ph" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.035 [INFO][4609] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" HandleID="k8s-pod-network.a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.050 [INFO][4609] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" HandleID="k8s-pod-network.a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332c80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-9-2d52908736", "pod":"calico-kube-controllers-695856fb7d-5l4ph", "timestamp":"2025-01-16 09:00:23.035616978 +0000 UTC"}, Hostname:"ci-4081.3.0-9-2d52908736", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.051 [INFO][4609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.051 [INFO][4609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.051 [INFO][4609] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-9-2d52908736' Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.055 [INFO][4609] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.073 [INFO][4609] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.080 [INFO][4609] ipam/ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.083 [INFO][4609] ipam/ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.086 [INFO][4609] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.087 [INFO][4609] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.090 [INFO][4609] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2 Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.097 [INFO][4609] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.110 [INFO][4609] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.16.69/26] block=192.168.16.64/26 handle="k8s-pod-network.a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.110 [INFO][4609] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.69/26] handle="k8s-pod-network.a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.110 [INFO][4609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
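The ADD for calico-kube-controllers-695856fb7d-5l4ph above walks the IPAM algorithm step by step: look up the host's block affinities, confirm affinity for 192.168.16.64/26, load the block, assign one address, create a handle named after the container ID, and write the block back to claim 192.168.16.69, all inside the host-wide lock. A compact sketch of the claim step follows; the in-memory block with a used-address map is an assumption for illustration, since Calico keeps blocks in its datastore.

```go
// Sketch of claiming one address from a host-affine block, as in the
// assignment of 192.168.16.69/26 above.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr netip.Prefix          // e.g. 192.168.16.64/26
	used map[netip.Addr]string // addr -> allocation handle
}

// assign walks the block and claims the first unused address for the
// given handle, mirroring "Attempting to assign 1 addresses from block".
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.16.64/26"),
		used: map[netip.Addr]string{},
	}
	// Seed the first five addresses as taken so the next claim lands
	// on .69, matching the log (.65-.68 belong to existing endpoints).
	for i := 0; i < 5; i++ {
		b.assign(fmt.Sprintf("existing-%d", i))
	}
	ip, _ := b.assign("k8s-pod-network.a5805ba78925") // handle truncated for brevity
	fmt.Println("Successfully claimed:", ip)          // 192.168.16.69
}
```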
Jan 16 09:00:23.152038 containerd[1468]: 2025-01-16 09:00:23.110 [INFO][4609] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.69/26] IPv6=[] ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" HandleID="k8s-pod-network.a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:23.153295 containerd[1468]: 2025-01-16 09:00:23.113 [INFO][4598] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Namespace="calico-system" Pod="calico-kube-controllers-695856fb7d-5l4ph" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0", GenerateName:"calico-kube-controllers-695856fb7d-", Namespace:"calico-system", SelfLink:"", UID:"5717cdda-4a10-4088-8b75-4fff7e8b3b8d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695856fb7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"", Pod:"calico-kube-controllers-695856fb7d-5l4ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali049fdbee065", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:23.153295 containerd[1468]: 2025-01-16 09:00:23.113 [INFO][4598] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.16.69/32] ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Namespace="calico-system" Pod="calico-kube-controllers-695856fb7d-5l4ph" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:23.153295 containerd[1468]: 2025-01-16 09:00:23.113 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali049fdbee065 ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Namespace="calico-system" Pod="calico-kube-controllers-695856fb7d-5l4ph" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:23.153295 containerd[1468]: 2025-01-16 09:00:23.118 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Namespace="calico-system" Pod="calico-kube-controllers-695856fb7d-5l4ph" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:23.153295 
containerd[1468]: 2025-01-16 09:00:23.121 [INFO][4598] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Namespace="calico-system" Pod="calico-kube-controllers-695856fb7d-5l4ph" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0", GenerateName:"calico-kube-controllers-695856fb7d-", Namespace:"calico-system", SelfLink:"", UID:"5717cdda-4a10-4088-8b75-4fff7e8b3b8d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695856fb7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2", Pod:"calico-kube-controllers-695856fb7d-5l4ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali049fdbee065", MAC:"e2:f3:d7:66:72:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:23.153295 containerd[1468]: 2025-01-16 09:00:23.143 [INFO][4598] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2" Namespace="calico-system" Pod="calico-kube-controllers-695856fb7d-5l4ph" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:00:23.191449 containerd[1468]: time="2025-01-16T09:00:23.189940806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:00:23.191449 containerd[1468]: time="2025-01-16T09:00:23.190003333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:00:23.191449 containerd[1468]: time="2025-01-16T09:00:23.190018634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:23.191449 containerd[1468]: time="2025-01-16T09:00:23.190106174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:23.233725 systemd[1]: Started cri-containerd-a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2.scope - libcontainer container a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2. 
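"Setting the host side veth name to cali049fdbee065" shows how each endpoint gets a stable host interface name: a "cali" prefix plus characters derived from the endpoint identity, short enough for the kernel's 15-character interface name limit. A sketch in that style; the exact hash input chosen here is an assumption, not necessarily Calico's precise scheme.

```go
// Sketch of deriving a host-side interface name in the style of
// cali049fdbee065 above: a fixed prefix plus a hash of the endpoint
// identity, truncated to fit the 15-char kernel limit (IFNAMSIZ - 1).
// The SHA-1 input used here is an illustrative assumption.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName returns "cali" + the first 11 hex chars of a hash of
// the endpoint ID: 15 characters total, deterministic per endpoint.
func hostVethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	name := hostVethName("calico-system/calico-kube-controllers-695856fb7d-5l4ph")
	fmt.Println(name, "len:", len(name)) // always 15 characters
}
```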
Jan 16 09:00:23.290766 containerd[1468]: time="2025-01-16T09:00:23.290689963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695856fb7d-5l4ph,Uid:5717cdda-4a10-4088-8b75-4fff7e8b3b8d,Namespace:calico-system,Attempt:1,} returns sandbox id \"a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2\"" Jan 16 09:00:24.769315 containerd[1468]: time="2025-01-16T09:00:24.769218345Z" level=info msg="StopPodSandbox for \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\"" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.830 [INFO][4682] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.831 [INFO][4682] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" iface="eth0" netns="/var/run/netns/cni-ff1b722d-1ed6-82ba-8dfa-8f2657adcc12" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.832 [INFO][4682] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" iface="eth0" netns="/var/run/netns/cni-ff1b722d-1ed6-82ba-8dfa-8f2657adcc12" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.833 [INFO][4682] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" iface="eth0" netns="/var/run/netns/cni-ff1b722d-1ed6-82ba-8dfa-8f2657adcc12" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.833 [INFO][4682] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.833 [INFO][4682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.860 [INFO][4688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" HandleID="k8s-pod-network.75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.860 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.860 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.868 [WARNING][4688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" HandleID="k8s-pod-network.75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.869 [INFO][4688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" HandleID="k8s-pod-network.75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.871 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:00:24.875028 containerd[1468]: 2025-01-16 09:00:24.873 [INFO][4682] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:00:24.875753 containerd[1468]: time="2025-01-16T09:00:24.875254353Z" level=info msg="TearDown network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\" successfully" Jan 16 09:00:24.875753 containerd[1468]: time="2025-01-16T09:00:24.875290831Z" level=info msg="StopPodSandbox for \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\" returns successfully" Jan 16 09:00:24.878789 kubelet[2539]: E0116 09:00:24.878750 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:24.881701 containerd[1468]: time="2025-01-16T09:00:24.879679032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gz6pg,Uid:d521e8c4-e6b6-49d5-b863-b778812328d0,Namespace:kube-system,Attempt:1,}" Jan 16 09:00:24.880999 systemd[1]: run-netns-cni\x2dff1b722d\x2d1ed6\x2d82ba\x2d8dfa\x2d8f2657adcc12.mount: Deactivated successfully. 
Jan 16 09:00:24.943445 systemd-networkd[1373]: cali049fdbee065: Gained IPv6LL Jan 16 09:00:25.084867 systemd-networkd[1373]: cali42af353429c: Link UP Jan 16 09:00:25.085879 systemd-networkd[1373]: cali42af353429c: Gained carrier Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:24.963 [INFO][4695] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0 coredns-76f75df574- kube-system d521e8c4-e6b6-49d5-b863-b778812328d0 855 0 2025-01-16 08:59:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-9-2d52908736 coredns-76f75df574-gz6pg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali42af353429c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Namespace="kube-system" Pod="coredns-76f75df574-gz6pg" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:24.963 [INFO][4695] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Namespace="kube-system" Pod="coredns-76f75df574-gz6pg" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.012 [INFO][4706] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" HandleID="k8s-pod-network.ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.027 [INFO][4706] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" HandleID="k8s-pod-network.ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050d30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-9-2d52908736", "pod":"coredns-76f75df574-gz6pg", "timestamp":"2025-01-16 09:00:25.012145186 +0000 UTC"}, Hostname:"ci-4081.3.0-9-2d52908736", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.027 [INFO][4706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.027 [INFO][4706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.027 [INFO][4706] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-9-2d52908736' Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.030 [INFO][4706] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.037 [INFO][4706] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.045 [INFO][4706] ipam/ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.048 [INFO][4706] ipam/ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.051 [INFO][4706] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.051 [INFO][4706] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.054 [INFO][4706] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4 Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.062 [INFO][4706] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.072 [INFO][4706] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.16.70/26] block=192.168.16.64/26 handle="k8s-pod-network.ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.072 [INFO][4706] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.70/26] handle="k8s-pod-network.ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" host="ci-4081.3.0-9-2d52908736" Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.072 [INFO][4706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
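The IPAM walk above is Calico's block-affinity scheme: each node holds an affine /26 (here 192.168.16.64/26) and claims addresses out of it locally, only writing the block back to the datastore to record the claim. A toy version of the claim step, assuming the six lowest addresses are already taken, which is what the claimed .70 implies (real Calico also persists handles, reservations, and the block itself):

```go
package main

import (
	"fmt"
	"net/netip"
)

// claimFirstFree walks the affine block and takes the first address
// that is not already handed out.
func claimFirstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.16.64/26") // affinity confirmed in the log
	used := map[netip.Addr]bool{}
	for _, s := range []string{"192.168.16.64", "192.168.16.65", "192.168.16.66",
		"192.168.16.67", "192.168.16.68", "192.168.16.69"} {
		used[netip.MustParseAddr(s)] = true
	}
	if ip, ok := claimFirstFree(block, used); ok {
		fmt.Println("claimed", ip) // 192.168.16.70, matching the log
	}
}
```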
Jan 16 09:00:25.110197 containerd[1468]: 2025-01-16 09:00:25.072 [INFO][4706] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.16.70/26] IPv6=[] ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" HandleID="k8s-pod-network.ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:25.110994 containerd[1468]: 2025-01-16 09:00:25.076 [INFO][4695] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Namespace="kube-system" Pod="coredns-76f75df574-gz6pg" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d521e8c4-e6b6-49d5-b863-b778812328d0", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"", Pod:"coredns-76f75df574-gz6pg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42af353429c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:25.110994 containerd[1468]: 2025-01-16 09:00:25.076 [INFO][4695] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.16.70/32] ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Namespace="kube-system" Pod="coredns-76f75df574-gz6pg" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:25.110994 containerd[1468]: 2025-01-16 09:00:25.076 [INFO][4695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42af353429c ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Namespace="kube-system" Pod="coredns-76f75df574-gz6pg" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:25.110994 containerd[1468]: 2025-01-16 09:00:25.086 [INFO][4695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Namespace="kube-system" Pod="coredns-76f75df574-gz6pg" 
WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:25.110994 containerd[1468]: 2025-01-16 09:00:25.088 [INFO][4695] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Namespace="kube-system" Pod="coredns-76f75df574-gz6pg" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d521e8c4-e6b6-49d5-b863-b778812328d0", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4", Pod:"coredns-76f75df574-gz6pg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42af353429c", MAC:"8e:f1:a3:88:84:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:00:25.110994 containerd[1468]: 2025-01-16 09:00:25.105 [INFO][4695] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4" Namespace="kube-system" Pod="coredns-76f75df574-gz6pg" WorkloadEndpoint="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:00:25.157319 containerd[1468]: time="2025-01-16T09:00:25.157187870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:00:25.157631 containerd[1468]: time="2025-01-16T09:00:25.157252821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:00:25.157631 containerd[1468]: time="2025-01-16T09:00:25.157291318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:25.157631 containerd[1468]: time="2025-01-16T09:00:25.157434732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:00:25.194751 systemd[1]: Started cri-containerd-ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4.scope - libcontainer container ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4. Jan 16 09:00:25.269539 containerd[1468]: time="2025-01-16T09:00:25.269478560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gz6pg,Uid:d521e8c4-e6b6-49d5-b863-b778812328d0,Namespace:kube-system,Attempt:1,} returns sandbox id \"ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4\"" Jan 16 09:00:25.271226 kubelet[2539]: E0116 09:00:25.271191 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:25.276480 containerd[1468]: time="2025-01-16T09:00:25.276361526Z" level=info msg="CreateContainer within sandbox \"ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 09:00:25.289049 containerd[1468]: time="2025-01-16T09:00:25.288908028Z" level=info msg="CreateContainer within sandbox \"ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b78d63beeb1e4f691b69ec7843c2721001ab735f01d0aa6ad3fbb76e727e922a\"" Jan 16 09:00:25.290197 containerd[1468]: time="2025-01-16T09:00:25.290149774Z" level=info msg="StartContainer for \"b78d63beeb1e4f691b69ec7843c2721001ab735f01d0aa6ad3fbb76e727e922a\"" Jan 16 09:00:25.328614 systemd[1]: Started cri-containerd-b78d63beeb1e4f691b69ec7843c2721001ab735f01d0aa6ad3fbb76e727e922a.scope - libcontainer container b78d63beeb1e4f691b69ec7843c2721001ab735f01d0aa6ad3fbb76e727e922a. 
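The RunPodSandbox → CreateContainer → StartContainer records above, together with the cri-containerd-&lt;id&gt;.scope units, are the CRI plugin driving containerd. A stripped-down analogue using the containerd Go client directly, with all the sandbox/CNI machinery elided (socket path, namespace, and image tag are illustrative assumptions, not values from this log):

```go
package main

import (
	"context"
	"log"
	"syscall"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Talk to the same containerd the kubelet uses (socket path assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack an image.
	image, err := client.Pull(ctx, "docker.io/library/busybox:1.36", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox" boils down to container metadata
	// plus a writable snapshot and an OCI runtime spec...
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// ...and "StartContainer" to creating and starting a task, the runc
	// shim that systemd wraps in the cri-containerd-<id>.scope unit.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	_ = task.Kill(ctx, syscall.SIGTERM) // best effort; the shell may already have exited
	<-statusC
}
```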
Jan 16 09:00:25.377975 containerd[1468]: time="2025-01-16T09:00:25.377926784Z" level=info msg="StartContainer for \"b78d63beeb1e4f691b69ec7843c2721001ab735f01d0aa6ad3fbb76e727e922a\" returns successfully" Jan 16 09:00:26.196153 kubelet[2539]: E0116 09:00:26.196094 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:26.218448 kubelet[2539]: I0116 09:00:26.217901 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gz6pg" podStartSLOduration=50.217854837 podStartE2EDuration="50.217854837s" podCreationTimestamp="2025-01-16 08:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:00:26.217284483 +0000 UTC m=+65.599356062" watchObservedRunningTime="2025-01-16 09:00:26.217854837 +0000 UTC m=+65.599926433" Jan 16 09:00:26.926886 systemd-networkd[1373]: cali42af353429c: Gained IPv6LL Jan 16 09:00:27.199573 kubelet[2539]: E0116 09:00:27.199435 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:28.202191 kubelet[2539]: E0116 09:00:28.201863 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:39.767601 kubelet[2539]: E0116 09:00:39.767558 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:41.019469 containerd[1468]: time="2025-01-16T09:00:41.019198200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:41.020754 containerd[1468]: time="2025-01-16T09:00:41.020355484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 16 09:00:41.021426 containerd[1468]: time="2025-01-16T09:00:41.021377504Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:41.025119 containerd[1468]: time="2025-01-16T09:00:41.024694263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:41.026005 containerd[1468]: time="2025-01-16T09:00:41.025954768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 25.844324986s" Jan 16 09:00:41.026245 containerd[1468]: time="2025-01-16T09:00:41.026217551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 16 09:00:41.027766 containerd[1468]: time="2025-01-16T09:00:41.027714975Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 16 09:00:41.033144 containerd[1468]: time="2025-01-16T09:00:41.033081066Z" level=info msg="CreateContainer within sandbox \"7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 16 09:00:41.066344 containerd[1468]: time="2025-01-16T09:00:41.066281173Z" level=info msg="CreateContainer within sandbox \"7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5d1282fac2e037e12bd127c92989dd9ed32a5bebf8e904367c3247a84de80515\"" Jan 16 09:00:41.070905 containerd[1468]: time="2025-01-16T09:00:41.067652821Z" level=info msg="StartContainer for \"5d1282fac2e037e12bd127c92989dd9ed32a5bebf8e904367c3247a84de80515\"" Jan 16 09:00:41.144636 systemd[1]: Started cri-containerd-5d1282fac2e037e12bd127c92989dd9ed32a5bebf8e904367c3247a84de80515.scope - libcontainer container 5d1282fac2e037e12bd127c92989dd9ed32a5bebf8e904367c3247a84de80515. Jan 16 09:00:41.181490 containerd[1468]: time="2025-01-16T09:00:41.181247462Z" level=info msg="StartContainer for \"5d1282fac2e037e12bd127c92989dd9ed32a5bebf8e904367c3247a84de80515\" returns successfully" Jan 16 09:00:45.767619 kubelet[2539]: E0116 09:00:45.767564 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:48.028649 containerd[1468]: time="2025-01-16T09:00:48.028513996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:48.029753 containerd[1468]: time="2025-01-16T09:00:48.029546399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 16 09:00:48.030579 containerd[1468]: time="2025-01-16T09:00:48.030241973Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:48.037302 containerd[1468]: time="2025-01-16T09:00:48.036980501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:48.037971 containerd[1468]: time="2025-01-16T09:00:48.037918365Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 7.010152268s" Jan 16 09:00:48.037971 containerd[1468]: time="2025-01-16T09:00:48.037971196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 16 09:00:48.040429 containerd[1468]: time="2025-01-16T09:00:48.038864870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 16 09:00:48.043466 containerd[1468]: time="2025-01-16T09:00:48.043287733Z" level=info msg="CreateContainer within sandbox \"5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 16 09:00:48.059104 containerd[1468]: time="2025-01-16T09:00:48.058872083Z" level=info msg="CreateContainer within sandbox \"5eac6e3340bae2af589e72be908073a7c7725b1a05256d137059ee80fcfdffb4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"35f73b32345163066d7dc1d84b5232bd43d6f457362960462924f51960903d19\"" Jan 16 09:00:48.061510 containerd[1468]: time="2025-01-16T09:00:48.060054136Z" level=info msg="StartContainer for \"35f73b32345163066d7dc1d84b5232bd43d6f457362960462924f51960903d19\"" Jan 16 09:00:48.126640 systemd[1]: Started cri-containerd-35f73b32345163066d7dc1d84b5232bd43d6f457362960462924f51960903d19.scope - libcontainer container 35f73b32345163066d7dc1d84b5232bd43d6f457362960462924f51960903d19. Jan 16 09:00:48.178741 containerd[1468]: time="2025-01-16T09:00:48.178691647Z" level=info msg="StartContainer for \"35f73b32345163066d7dc1d84b5232bd43d6f457362960462924f51960903d19\" returns successfully" Jan 16 09:00:48.292635 kubelet[2539]: I0116 09:00:48.292467 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5948865d94-pzkn6" podStartSLOduration=27.482081228 podStartE2EDuration="1m0.292417715s" podCreationTimestamp="2025-01-16 08:59:48 +0000 UTC" firstStartedPulling="2025-01-16 09:00:15.228256834 +0000 UTC m=+54.610328404" lastFinishedPulling="2025-01-16 09:00:48.038593315 +0000 UTC m=+87.420664891" observedRunningTime="2025-01-16 09:00:48.292218881 +0000 UTC m=+87.674290459" watchObservedRunningTime="2025-01-16 09:00:48.292417715 +0000 UTC m=+87.674489288" Jan 16 09:00:48.420436 containerd[1468]: time="2025-01-16T09:00:48.420194439Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:48.424064 containerd[1468]: time="2025-01-16T09:00:48.423211047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 16 09:00:48.429109 containerd[1468]: time="2025-01-16T09:00:48.428854051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 389.94782ms" Jan 16 09:00:48.429918 containerd[1468]: time="2025-01-16T09:00:48.429724793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 16 09:00:48.432348 containerd[1468]: time="2025-01-16T09:00:48.431890439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 16 09:00:48.435267 containerd[1468]: time="2025-01-16T09:00:48.435085380Z" level=info msg="CreateContainer within sandbox \"6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 16 09:00:48.454281 containerd[1468]: time="2025-01-16T09:00:48.454214209Z" level=info msg="CreateContainer within sandbox \"6b0c1d848e76d8c4fd05df48326534fd6f359015df2c7d4199302727ed63712b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a92a96e3a0879e1f167fa2880ea1e7d91b561a8edda5337b4c43c6954f79a15\"" Jan 16 09:00:48.457202 
containerd[1468]: time="2025-01-16T09:00:48.456375042Z" level=info msg="StartContainer for \"1a92a96e3a0879e1f167fa2880ea1e7d91b561a8edda5337b4c43c6954f79a15\"" Jan 16 09:00:48.522707 systemd[1]: Started cri-containerd-1a92a96e3a0879e1f167fa2880ea1e7d91b561a8edda5337b4c43c6954f79a15.scope - libcontainer container 1a92a96e3a0879e1f167fa2880ea1e7d91b561a8edda5337b4c43c6954f79a15. Jan 16 09:00:48.591604 containerd[1468]: time="2025-01-16T09:00:48.589666176Z" level=info msg="StartContainer for \"1a92a96e3a0879e1f167fa2880ea1e7d91b561a8edda5337b4c43c6954f79a15\" returns successfully" Jan 16 09:00:49.487910 kubelet[2539]: I0116 09:00:49.487731 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5948865d94-2bhf5" podStartSLOduration=29.668340881 podStartE2EDuration="1m1.487633046s" podCreationTimestamp="2025-01-16 08:59:48 +0000 UTC" firstStartedPulling="2025-01-16 09:00:16.611485588 +0000 UTC m=+55.993557148" lastFinishedPulling="2025-01-16 09:00:48.430777737 +0000 UTC m=+87.812849313" observedRunningTime="2025-01-16 09:00:49.304741138 +0000 UTC m=+88.686812718" watchObservedRunningTime="2025-01-16 09:00:49.487633046 +0000 UTC m=+88.869704626" Jan 16 09:00:51.767449 kubelet[2539]: E0116 09:00:51.767371 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:54.196790 containerd[1468]: time="2025-01-16T09:00:54.195546038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:54.196790 containerd[1468]: time="2025-01-16T09:00:54.196318534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 16 09:00:54.196790 containerd[1468]: time="2025-01-16T09:00:54.196693124Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:54.200619 containerd[1468]: time="2025-01-16T09:00:54.200564165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:00:54.201674 containerd[1468]: time="2025-01-16T09:00:54.201620425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 5.769684224s" Jan 16 09:00:54.201674 containerd[1468]: time="2025-01-16T09:00:54.201672972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 16 09:00:54.202663 containerd[1468]: time="2025-01-16T09:00:54.202624455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 16 09:00:54.238952 containerd[1468]: time="2025-01-16T09:00:54.238904277Z" level=info msg="CreateContainer within sandbox \"a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 16 09:00:54.264376 containerd[1468]: time="2025-01-16T09:00:54.264009717Z" level=info msg="CreateContainer within sandbox \"a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bf5ddd00b25153bac8d5b7b190eb85a87ec29cc0cf9d13865a811ffca6827bb8\"" Jan 16 09:00:54.267741 containerd[1468]: time="2025-01-16T09:00:54.267688922Z" level=info msg="StartContainer for \"bf5ddd00b25153bac8d5b7b190eb85a87ec29cc0cf9d13865a811ffca6827bb8\"" Jan 16 09:00:54.332643 systemd[1]: Started cri-containerd-bf5ddd00b25153bac8d5b7b190eb85a87ec29cc0cf9d13865a811ffca6827bb8.scope - libcontainer container bf5ddd00b25153bac8d5b7b190eb85a87ec29cc0cf9d13865a811ffca6827bb8. Jan 16 09:00:54.402219 containerd[1468]: time="2025-01-16T09:00:54.402174370Z" level=info msg="StartContainer for \"bf5ddd00b25153bac8d5b7b190eb85a87ec29cc0cf9d13865a811ffca6827bb8\" returns successfully" Jan 16 09:00:55.403740 kubelet[2539]: I0116 09:00:55.403071 2539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-695856fb7d-5l4ph" podStartSLOduration=35.494251888 podStartE2EDuration="1m6.403013322s" podCreationTimestamp="2025-01-16 08:59:49 +0000 UTC" firstStartedPulling="2025-01-16 09:00:23.293475745 +0000 UTC m=+62.675547302" lastFinishedPulling="2025-01-16 09:00:54.202237176 +0000 UTC m=+93.584308736" observedRunningTime="2025-01-16 09:00:55.339270736 +0000 UTC m=+94.721342317" watchObservedRunningTime="2025-01-16 09:00:55.403013322 +0000 UTC m=+94.785084905" Jan 16 09:01:02.277185 containerd[1468]: time="2025-01-16T09:01:02.276991602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:01:02.282132 containerd[1468]: time="2025-01-16T09:01:02.281746018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 16 09:01:02.286448 containerd[1468]: time="2025-01-16T09:01:02.284710534Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:01:02.290772 containerd[1468]: time="2025-01-16T09:01:02.290644502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:01:02.293835 containerd[1468]: time="2025-01-16T09:01:02.293722762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 8.091055052s" Jan 16 09:01:02.293835 containerd[1468]: time="2025-01-16T09:01:02.293816586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 16 09:01:02.301245 containerd[1468]: time="2025-01-16T09:01:02.301085339Z" level=info msg="CreateContainer within sandbox 
\"7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 16 09:01:02.338443 containerd[1468]: time="2025-01-16T09:01:02.335583939Z" level=info msg="CreateContainer within sandbox \"7907f80fff6bf6e2c8499736fbed9dc15ef3baa46349b17d53cf0bded653e771\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f4e16f0237fe88fa68b7de69664104b9796ef13608efd7ec1230fa42c022a4ee\"" Jan 16 09:01:02.338443 containerd[1468]: time="2025-01-16T09:01:02.336766342Z" level=info msg="StartContainer for \"f4e16f0237fe88fa68b7de69664104b9796ef13608efd7ec1230fa42c022a4ee\"" Jan 16 09:01:02.442429 systemd[1]: Started cri-containerd-f4e16f0237fe88fa68b7de69664104b9796ef13608efd7ec1230fa42c022a4ee.scope - libcontainer container f4e16f0237fe88fa68b7de69664104b9796ef13608efd7ec1230fa42c022a4ee. Jan 16 09:01:02.577361 containerd[1468]: time="2025-01-16T09:01:02.569922409Z" level=info msg="StartContainer for \"f4e16f0237fe88fa68b7de69664104b9796ef13608efd7ec1230fa42c022a4ee\" returns successfully" Jan 16 09:01:02.773657 kubelet[2539]: E0116 09:01:02.768674 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:03.236426 kubelet[2539]: I0116 09:01:03.236335 2539 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 16 09:01:03.240771 kubelet[2539]: I0116 09:01:03.240701 2539 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 16 09:01:05.974946 systemd[1]: Started sshd@11-147.182.202.230:22-139.178.68.195:56526.service - OpenSSH per-connection server daemon (139.178.68.195:56526). Jan 16 09:01:06.167448 sshd[5135]: Accepted publickey for core from 139.178.68.195 port 56526 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:01:06.171171 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:06.180631 systemd-logind[1445]: New session 8 of user core. Jan 16 09:01:06.188752 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 09:01:06.782147 sshd[5135]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:06.789473 systemd[1]: sshd@11-147.182.202.230:22-139.178.68.195:56526.service: Deactivated successfully. Jan 16 09:01:06.792648 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 09:01:06.794483 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Jan 16 09:01:06.796279 systemd-logind[1445]: Removed session 8. Jan 16 09:01:11.805003 systemd[1]: Started sshd@12-147.182.202.230:22-139.178.68.195:56528.service - OpenSSH per-connection server daemon (139.178.68.195:56528). Jan 16 09:01:11.861840 sshd[5153]: Accepted publickey for core from 139.178.68.195 port 56528 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:01:11.864016 sshd[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:11.869094 systemd-logind[1445]: New session 9 of user core. Jan 16 09:01:11.875698 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 16 09:01:12.035799 sshd[5153]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:12.040369 systemd[1]: sshd@12-147.182.202.230:22-139.178.68.195:56528.service: Deactivated successfully. Jan 16 09:01:12.043303 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 09:01:12.044208 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Jan 16 09:01:12.045472 systemd-logind[1445]: Removed session 9. Jan 16 09:01:15.767850 kubelet[2539]: E0116 09:01:15.767794 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:17.055918 systemd[1]: Started sshd@13-147.182.202.230:22-139.178.68.195:53766.service - OpenSSH per-connection server daemon (139.178.68.195:53766). Jan 16 09:01:17.139123 sshd[5188]: Accepted publickey for core from 139.178.68.195 port 53766 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:01:17.142360 sshd[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:17.150433 systemd-logind[1445]: New session 10 of user core. Jan 16 09:01:17.155716 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 09:01:17.308943 sshd[5188]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:17.312890 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Jan 16 09:01:17.313134 systemd[1]: sshd@13-147.182.202.230:22-139.178.68.195:53766.service: Deactivated successfully. Jan 16 09:01:17.315889 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 09:01:17.318592 systemd-logind[1445]: Removed session 10. Jan 16 09:01:21.978042 containerd[1468]: time="2025-01-16T09:01:21.977990053Z" level=info msg="StopPodSandbox for \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\"" Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.183 [WARNING][5215] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d521e8c4-e6b6-49d5-b863-b778812328d0", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4", Pod:"coredns-76f75df574-gz6pg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42af353429c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.187 [INFO][5215] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.187 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" iface="eth0" netns="" Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.187 [INFO][5215] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.187 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.228 [INFO][5221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" HandleID="k8s-pod-network.75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.229 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.229 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.239 [WARNING][5221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" HandleID="k8s-pod-network.75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.239 [INFO][5221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" HandleID="k8s-pod-network.75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.242 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:01:22.250339 containerd[1468]: 2025-01-16 09:01:22.248 [INFO][5215] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:01:22.250339 containerd[1468]: time="2025-01-16T09:01:22.250280042Z" level=info msg="TearDown network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\" successfully" Jan 16 09:01:22.250339 containerd[1468]: time="2025-01-16T09:01:22.250309774Z" level=info msg="StopPodSandbox for \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\" returns successfully" Jan 16 09:01:22.253966 containerd[1468]: time="2025-01-16T09:01:22.252322567Z" level=info msg="RemovePodSandbox for \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\"" Jan 16 09:01:22.253966 containerd[1468]: time="2025-01-16T09:01:22.252361150Z" level=info msg="Forcibly stopping sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\"" Jan 16 09:01:22.334085 systemd[1]: Started sshd@14-147.182.202.230:22-139.178.68.195:53778.service - OpenSSH per-connection server daemon (139.178.68.195:53778). Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.308 [WARNING][5239] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d521e8c4-e6b6-49d5-b863-b778812328d0", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"ea5a75bae2b68529d0c91916d05003fc5954b5636f25ead82b744ca795b550d4", Pod:"coredns-76f75df574-gz6pg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42af353429c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.309 [INFO][5239] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.309 [INFO][5239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" iface="eth0" netns="" Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.309 [INFO][5239] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.309 [INFO][5239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.373 [INFO][5246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" HandleID="k8s-pod-network.75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.374 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.374 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.389 [WARNING][5246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" HandleID="k8s-pod-network.75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.390 [INFO][5246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" HandleID="k8s-pod-network.75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Workload="ci--4081.3.0--9--2d52908736-k8s-coredns--76f75df574--gz6pg-eth0" Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.400 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:01:22.410257 containerd[1468]: 2025-01-16 09:01:22.405 [INFO][5239] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122" Jan 16 09:01:22.412519 containerd[1468]: time="2025-01-16T09:01:22.411072861Z" level=info msg="TearDown network for sandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\" successfully" Jan 16 09:01:22.440018 containerd[1468]: time="2025-01-16T09:01:22.439855250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:01:22.440414 containerd[1468]: time="2025-01-16T09:01:22.440113976Z" level=info msg="RemovePodSandbox \"75b105e0458b352b6c5cbbbc5f9b4addca9ab4ddf85cbb27ad727ee2ba17f122\" returns successfully" Jan 16 09:01:22.442572 containerd[1468]: time="2025-01-16T09:01:22.442034998Z" level=info msg="StopPodSandbox for \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\"" Jan 16 09:01:22.475498 sshd[5251]: Accepted publickey for core from 139.178.68.195 port 53778 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:01:22.483094 sshd[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:22.501507 systemd-logind[1445]: New session 11 of user core. Jan 16 09:01:22.506779 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.548 [WARNING][5267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0", GenerateName:"calico-kube-controllers-695856fb7d-", Namespace:"calico-system", SelfLink:"", UID:"5717cdda-4a10-4088-8b75-4fff7e8b3b8d", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695856fb7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2", Pod:"calico-kube-controllers-695856fb7d-5l4ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali049fdbee065", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.548 [INFO][5267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.548 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" iface="eth0" netns="" Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.548 [INFO][5267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.548 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.598 [INFO][5274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" HandleID="k8s-pod-network.602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.598 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.598 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.615 [WARNING][5274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" HandleID="k8s-pod-network.602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.615 [INFO][5274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" HandleID="k8s-pod-network.602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.619 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:01:22.628668 containerd[1468]: 2025-01-16 09:01:22.625 [INFO][5267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:01:22.630564 containerd[1468]: time="2025-01-16T09:01:22.628736567Z" level=info msg="TearDown network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\" successfully" Jan 16 09:01:22.630564 containerd[1468]: time="2025-01-16T09:01:22.630507407Z" level=info msg="StopPodSandbox for \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\" returns successfully" Jan 16 09:01:22.632195 containerd[1468]: time="2025-01-16T09:01:22.631104783Z" level=info msg="RemovePodSandbox for \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\"" Jan 16 09:01:22.632195 containerd[1468]: time="2025-01-16T09:01:22.631141018Z" level=info msg="Forcibly stopping sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\"" Jan 16 09:01:22.763094 sshd[5251]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:22.770835 kubelet[2539]: E0116 09:01:22.770806 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:22.781120 systemd[1]: sshd@14-147.182.202.230:22-139.178.68.195:53778.service: Deactivated successfully. Jan 16 09:01:22.786052 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 09:01:22.791598 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Jan 16 09:01:22.800826 systemd[1]: Started sshd@15-147.182.202.230:22-139.178.68.195:53782.service - OpenSSH per-connection server daemon (139.178.68.195:53782). Jan 16 09:01:22.815165 systemd-logind[1445]: Removed session 11. Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.738 [WARNING][5300] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0", GenerateName:"calico-kube-controllers-695856fb7d-", Namespace:"calico-system", SelfLink:"", UID:"5717cdda-4a10-4088-8b75-4fff7e8b3b8d", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.January, 16, 8, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695856fb7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-9-2d52908736", ContainerID:"a5805ba78925a348e67cef1fa144a25e7a7fbc03d0d458bfa08a961093ebcfd2", Pod:"calico-kube-controllers-695856fb7d-5l4ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali049fdbee065", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.739 [INFO][5300] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.739 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" iface="eth0" netns="" Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.739 [INFO][5300] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.739 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.793 [INFO][5307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" HandleID="k8s-pod-network.602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.793 [INFO][5307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.794 [INFO][5307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.809 [WARNING][5307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" HandleID="k8s-pod-network.602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.809 [INFO][5307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" HandleID="k8s-pod-network.602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Workload="ci--4081.3.0--9--2d52908736-k8s-calico--kube--controllers--695856fb7d--5l4ph-eth0" Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.816 [INFO][5307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 16 09:01:22.823404 containerd[1468]: 2025-01-16 09:01:22.820 [INFO][5300] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2" Jan 16 09:01:22.823404 containerd[1468]: time="2025-01-16T09:01:22.823189960Z" level=info msg="TearDown network for sandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\" successfully" Jan 16 09:01:22.838475 containerd[1468]: time="2025-01-16T09:01:22.838061790Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 16 09:01:22.838475 containerd[1468]: time="2025-01-16T09:01:22.838206431Z" level=info msg="RemovePodSandbox \"602ea6638f52dd8b61311452e235b881d12df663a071f7b97bfe1326d09e0cc2\" returns successfully" Jan 16 09:01:22.877281 sshd[5315]: Accepted publickey for core from 139.178.68.195 port 53782 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:01:22.878233 sshd[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:22.884920 systemd-logind[1445]: New session 12 of user core. Jan 16 09:01:22.889988 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 16 09:01:23.117194 sshd[5315]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:23.131163 systemd[1]: sshd@15-147.182.202.230:22-139.178.68.195:53782.service: Deactivated successfully. Jan 16 09:01:23.138303 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 09:01:23.143090 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Jan 16 09:01:23.153704 systemd[1]: Started sshd@16-147.182.202.230:22-139.178.68.195:53794.service - OpenSSH per-connection server daemon (139.178.68.195:53794). Jan 16 09:01:23.157265 systemd-logind[1445]: Removed session 12. Jan 16 09:01:23.216239 sshd[5327]: Accepted publickey for core from 139.178.68.195 port 53794 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:01:23.218561 sshd[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:23.226357 systemd-logind[1445]: New session 13 of user core. Jan 16 09:01:23.233767 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 16 09:01:23.385801 sshd[5327]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:23.391618 systemd[1]: sshd@16-147.182.202.230:22-139.178.68.195:53794.service: Deactivated successfully. Jan 16 09:01:23.394676 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 16 09:01:23.395693 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit.
Jan 16 09:01:23.396883 systemd-logind[1445]: Removed session 13.
Jan 16 09:01:28.402802 systemd[1]: Started sshd@17-147.182.202.230:22-139.178.68.195:35384.service - OpenSSH per-connection server daemon (139.178.68.195:35384).
Jan 16 09:01:28.449904 sshd[5344]: Accepted publickey for core from 139.178.68.195 port 35384 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:28.451773 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:28.458039 systemd-logind[1445]: New session 14 of user core.
Jan 16 09:01:28.464800 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 16 09:01:28.607731 sshd[5344]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:28.612942 systemd[1]: sshd@17-147.182.202.230:22-139.178.68.195:35384.service: Deactivated successfully.
Jan 16 09:01:28.617364 systemd[1]: session-14.scope: Deactivated successfully.
Jan 16 09:01:28.619574 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit.
Jan 16 09:01:28.620906 systemd-logind[1445]: Removed session 14.
Jan 16 09:01:30.768184 kubelet[2539]: E0116 09:01:30.767856 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:01:33.634068 systemd[1]: Started sshd@18-147.182.202.230:22-139.178.68.195:35388.service - OpenSSH per-connection server daemon (139.178.68.195:35388).
Jan 16 09:01:33.673847 sshd[5376]: Accepted publickey for core from 139.178.68.195 port 35388 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:33.676082 sshd[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:33.681262 systemd-logind[1445]: New session 15 of user core.
Jan 16 09:01:33.690696 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 16 09:01:33.836216 sshd[5376]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:33.841872 systemd[1]: sshd@18-147.182.202.230:22-139.178.68.195:35388.service: Deactivated successfully.
Jan 16 09:01:33.849277 systemd[1]: session-15.scope: Deactivated successfully.
Jan 16 09:01:33.850942 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit.
Jan 16 09:01:33.852720 systemd-logind[1445]: Removed session 15.
Jan 16 09:01:35.769538 kubelet[2539]: E0116 09:01:35.769373 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:01:38.858105 systemd[1]: Started sshd@19-147.182.202.230:22-139.178.68.195:35848.service - OpenSSH per-connection server daemon (139.178.68.195:35848).
Jan 16 09:01:38.905219 sshd[5397]: Accepted publickey for core from 139.178.68.195 port 35848 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:38.907089 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:38.913591 systemd-logind[1445]: New session 16 of user core.
Jan 16 09:01:38.919689 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 16 09:01:39.064628 sshd[5397]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:39.070143 systemd[1]: sshd@19-147.182.202.230:22-139.178.68.195:35848.service: Deactivated successfully.
Jan 16 09:01:39.073256 systemd[1]: session-16.scope: Deactivated successfully.
Jan 16 09:01:39.075087 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit.
Jan 16 09:01:39.076344 systemd-logind[1445]: Removed session 16.
Jan 16 09:01:44.086985 systemd[1]: Started sshd@20-147.182.202.230:22-139.178.68.195:35862.service - OpenSSH per-connection server daemon (139.178.68.195:35862).
Jan 16 09:01:44.144861 sshd[5410]: Accepted publickey for core from 139.178.68.195 port 35862 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:44.147366 sshd[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:44.154230 systemd-logind[1445]: New session 17 of user core.
Jan 16 09:01:44.162841 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 16 09:01:44.448613 sshd[5410]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:44.454160 systemd[1]: sshd@20-147.182.202.230:22-139.178.68.195:35862.service: Deactivated successfully.
Jan 16 09:01:44.457186 systemd[1]: session-17.scope: Deactivated successfully.
Jan 16 09:01:44.458336 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit.
Jan 16 09:01:44.459767 systemd-logind[1445]: Removed session 17.
Jan 16 09:01:49.469855 systemd[1]: Started sshd@21-147.182.202.230:22-139.178.68.195:32870.service - OpenSSH per-connection server daemon (139.178.68.195:32870).
Jan 16 09:01:49.566068 sshd[5445]: Accepted publickey for core from 139.178.68.195 port 32870 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:49.569149 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:49.575477 systemd-logind[1445]: New session 18 of user core.
Jan 16 09:01:49.580786 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 16 09:01:49.744134 sshd[5445]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:49.753209 systemd[1]: sshd@21-147.182.202.230:22-139.178.68.195:32870.service: Deactivated successfully.
Jan 16 09:01:49.755932 systemd[1]: session-18.scope: Deactivated successfully.
Jan 16 09:01:49.759957 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit.
Jan 16 09:01:49.766785 systemd[1]: Started sshd@22-147.182.202.230:22-139.178.68.195:32872.service - OpenSSH per-connection server daemon (139.178.68.195:32872).
Jan 16 09:01:49.770642 kubelet[2539]: E0116 09:01:49.770279 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:01:49.772053 systemd-logind[1445]: Removed session 18.
Jan 16 09:01:49.818769 sshd[5457]: Accepted publickey for core from 139.178.68.195 port 32872 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:49.821192 sshd[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:49.828940 systemd-logind[1445]: New session 19 of user core.
Jan 16 09:01:49.834754 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 16 09:01:50.284094 sshd[5457]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:50.295504 systemd[1]: sshd@22-147.182.202.230:22-139.178.68.195:32872.service: Deactivated successfully.
Jan 16 09:01:50.299531 systemd[1]: session-19.scope: Deactivated successfully.
Jan 16 09:01:50.301854 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit.
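Each inbound connection above gets its own transient unit, named sshd@<n>-<local address>:<port>-<peer address>:<port>.service by the socket-activated OpenSSH daemon. A small Python helper (an illustrative sketch, not a systemd tool; the function name is invented) can split such a unit name back into its endpoints:

    import re

    UNIT = re.compile(r'sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service')

    def parse_sshd_unit(name):
        """E.g. 'sshd@21-147.182.202.230:22-139.178.68.195:32870.service'."""
        m = UNIT.search(name)
        if m is None:
            return None
        n, laddr, lport, raddr, rport = m.groups()
        return {"instance": int(n),
                "local": (laddr, int(lport)),
                "peer": (raddr, int(rport))}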
Jan 16 09:01:50.310972 systemd[1]: Started sshd@23-147.182.202.230:22-139.178.68.195:32880.service - OpenSSH per-connection server daemon (139.178.68.195:32880).
Jan 16 09:01:50.312507 systemd-logind[1445]: Removed session 19.
Jan 16 09:01:50.392068 sshd[5482]: Accepted publickey for core from 139.178.68.195 port 32880 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:50.394260 sshd[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:50.404202 systemd-logind[1445]: New session 20 of user core.
Jan 16 09:01:50.408670 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 16 09:01:53.582905 sshd[5482]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:53.599302 systemd[1]: Started sshd@24-147.182.202.230:22-139.178.68.195:32890.service - OpenSSH per-connection server daemon (139.178.68.195:32890).
Jan 16 09:01:53.602298 systemd[1]: sshd@23-147.182.202.230:22-139.178.68.195:32880.service: Deactivated successfully.
Jan 16 09:01:53.608296 systemd[1]: session-20.scope: Deactivated successfully.
Jan 16 09:01:53.618496 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Jan 16 09:01:53.622769 systemd-logind[1445]: Removed session 20.
Jan 16 09:01:53.796266 sshd[5500]: Accepted publickey for core from 139.178.68.195 port 32890 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:53.798633 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:53.817206 systemd-logind[1445]: New session 21 of user core.
Jan 16 09:01:53.823483 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 16 09:01:54.709389 sshd[5500]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:54.726057 systemd[1]: Started sshd@25-147.182.202.230:22-139.178.68.195:46736.service - OpenSSH per-connection server daemon (139.178.68.195:46736).
Jan 16 09:01:54.729565 systemd[1]: sshd@24-147.182.202.230:22-139.178.68.195:32890.service: Deactivated successfully.
Jan 16 09:01:54.740903 systemd[1]: session-21.scope: Deactivated successfully.
Jan 16 09:01:54.745968 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Jan 16 09:01:54.749162 systemd-logind[1445]: Removed session 21.
Jan 16 09:01:54.785880 kubelet[2539]: E0116 09:01:54.784862 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:01:54.866304 sshd[5515]: Accepted publickey for core from 139.178.68.195 port 46736 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:54.869257 sshd[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:54.877532 systemd-logind[1445]: New session 22 of user core.
Jan 16 09:01:54.884717 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 16 09:01:55.099291 sshd[5515]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:55.109259 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Jan 16 09:01:55.109579 systemd[1]: sshd@25-147.182.202.230:22-139.178.68.195:46736.service: Deactivated successfully.
Jan 16 09:01:55.113113 systemd[1]: session-22.scope: Deactivated successfully.
Jan 16 09:01:55.115724 systemd-logind[1445]: Removed session 22.
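The open/close bursts above repeat the same systemd-logind pattern ("New session N of user core." ... "Removed session N."). As a hedged convenience for reading journals like this one, the Python sketch below pairs those two messages and reports how long each session lasted; it assumes the "Jan 16 09:01:23.395693"-style timestamp prefix used throughout this log and is not part of the host being logged.

    import re
    from datetime import datetime

    OPENED = re.compile(r'^(\w{3} +\d+ [\d:.]+) .*systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.')
    CLOSED = re.compile(r'^(\w{3} +\d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.')

    def _ts(text, year=2025):
        # journal lines omit the year, so one must be supplied
        return datetime.strptime(f"{year} {text}", "%Y %b %d %H:%M:%S.%f")

    def session_durations(lines):
        open_sessions = {}
        for line in lines:
            m = OPENED.match(line)
            if m:
                open_sessions[m.group(2)] = (m.group(3), _ts(m.group(1)))
                continue
            m = CLOSED.match(line)
            if m and m.group(2) in open_sessions:
                user, start = open_sessions.pop(m.group(2))
                yield m.group(2), user, _ts(m.group(1)) - start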
Jan 16 09:02:00.121423 systemd[1]: Started sshd@26-147.182.202.230:22-139.178.68.195:46748.service - OpenSSH per-connection server daemon (139.178.68.195:46748).
Jan 16 09:02:00.187558 sshd[5552]: Accepted publickey for core from 139.178.68.195 port 46748 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:00.190064 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:00.197639 systemd-logind[1445]: New session 23 of user core.
Jan 16 09:02:00.212766 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 16 09:02:00.420508 sshd[5552]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:00.428003 systemd[1]: sshd@26-147.182.202.230:22-139.178.68.195:46748.service: Deactivated successfully.
Jan 16 09:02:00.433128 systemd[1]: session-23.scope: Deactivated successfully.
Jan 16 09:02:00.434796 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Jan 16 09:02:00.437155 systemd-logind[1445]: Removed session 23.
Jan 16 09:02:05.441815 systemd[1]: Started sshd@27-147.182.202.230:22-139.178.68.195:40906.service - OpenSSH per-connection server daemon (139.178.68.195:40906).
Jan 16 09:02:05.549440 sshd[5583]: Accepted publickey for core from 139.178.68.195 port 40906 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:05.550748 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:05.557684 systemd-logind[1445]: New session 24 of user core.
Jan 16 09:02:05.562708 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 16 09:02:05.758699 sshd[5583]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:05.763834 systemd[1]: sshd@27-147.182.202.230:22-139.178.68.195:40906.service: Deactivated successfully.
Jan 16 09:02:05.766232 systemd[1]: session-24.scope: Deactivated successfully.
Jan 16 09:02:05.770293 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Jan 16 09:02:05.774806 systemd-logind[1445]: Removed session 24.
Jan 16 09:02:10.781033 systemd[1]: Started sshd@28-147.182.202.230:22-139.178.68.195:40918.service - OpenSSH per-connection server daemon (139.178.68.195:40918).
Jan 16 09:02:10.903445 sshd[5599]: Accepted publickey for core from 139.178.68.195 port 40918 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:10.905634 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:10.912728 systemd-logind[1445]: New session 25 of user core.
Jan 16 09:02:10.922729 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 16 09:02:11.228243 sshd[5599]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:11.234222 systemd[1]: sshd@28-147.182.202.230:22-139.178.68.195:40918.service: Deactivated successfully.
Jan 16 09:02:11.238360 systemd[1]: session-25.scope: Deactivated successfully.
Jan 16 09:02:11.241169 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Jan 16 09:02:11.242898 systemd-logind[1445]: Removed session 25.
Jan 16 09:02:12.769018 kubelet[2539]: E0116 09:02:12.768856 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
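The recurring kubelet "Nameserver limits exceeded" error above reflects the glibc resolver's cap of three nameserver entries (MAXNS) in resolv.conf: entries beyond the first three are ignored, and here the applied line even lists 67.207.67.2 twice, wasting one of the three usable slots. A hedged Python sketch of the same check (illustrative only, not kubelet's code):

    MAXNS = 3  # glibc resolv.conf nameserver limit

    def check_resolv_conf(path="/etc/resolv.conf"):
        with open(path) as f:
            servers = [parts[1] for line in f
                       if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1]
        applied = servers[:MAXNS]
        if len(servers) > MAXNS:
            print(f"nameserver limit exceeded; applied line is: {' '.join(applied)}")
        if len(set(applied)) != len(applied):
            print("duplicate nameserver entries reduce effective redundancy")
        return applied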