Feb 13 20:14:47.075662 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 20:14:47.075709 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:14:47.076588 kernel: BIOS-provided physical RAM map: Feb 13 20:14:47.076623 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 20:14:47.076634 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 20:14:47.076653 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 20:14:47.076676 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Feb 13 20:14:47.076699 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Feb 13 20:14:47.076709 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 20:14:47.076732 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 20:14:47.076817 kernel: NX (Execute Disable) protection: active Feb 13 20:14:47.076828 kernel: APIC: Static calls initialized Feb 13 20:14:47.076848 kernel: SMBIOS 2.8 present. Feb 13 20:14:47.076860 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Feb 13 20:14:47.076880 kernel: Hypervisor detected: KVM Feb 13 20:14:47.076905 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 20:14:47.076927 kernel: kvm-clock: using sched offset of 4106098319 cycles Feb 13 20:14:47.076946 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 20:14:47.076963 kernel: tsc: Detected 2294.606 MHz processor Feb 13 20:14:47.076981 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:14:47.077000 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:14:47.077017 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Feb 13 20:14:47.077031 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 20:14:47.077043 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:14:47.077062 kernel: ACPI: Early table checksum verification disabled Feb 13 20:14:47.077075 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Feb 13 20:14:47.077088 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:14:47.077100 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:14:47.077112 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:14:47.077125 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 13 20:14:47.077139 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:14:47.077153 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:14:47.077168 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:14:47.077187 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:14:47.077200 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Feb 13 20:14:47.077226 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Feb 13 20:14:47.077238 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 13 20:14:47.077250 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Feb 13 20:14:47.077262 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Feb 13 20:14:47.077274 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Feb 13 20:14:47.077296 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Feb 13 20:14:47.077308 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 20:14:47.077322 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 20:14:47.077335 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 20:14:47.077347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 13 20:14:47.077365 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Feb 13 20:14:47.077378 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Feb 13 20:14:47.077398 kernel: Zone ranges: Feb 13 20:14:47.077411 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:14:47.077425 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Feb 13 20:14:47.077439 kernel: Normal empty Feb 13 20:14:47.077452 kernel: Movable zone start for each node Feb 13 20:14:47.077466 kernel: Early memory node ranges Feb 13 20:14:47.077481 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 20:14:47.077495 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Feb 13 20:14:47.077508 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Feb 13 20:14:47.077528 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:14:47.077543 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 20:14:47.077562 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Feb 13 20:14:47.077578 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 20:14:47.077592 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 20:14:47.077605 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:14:47.077618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 20:14:47.077634 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 20:14:47.077648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:14:47.077669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 20:14:47.077686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 20:14:47.077699 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:14:47.077713 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 20:14:47.077728 kernel: TSC deadline timer available Feb 13 20:14:47.077804 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 20:14:47.077824 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 20:14:47.077843 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 13 20:14:47.077868 kernel: Booting paravirtualized kernel on KVM Feb 13 20:14:47.077894 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:14:47.077913 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 20:14:47.077932 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Feb 13 20:14:47.077952 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 20:14:47.077968 kernel: pcpu-alloc: [0] 0 1 Feb 13 20:14:47.077982 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 13 20:14:47.078001 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:14:47.078017 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:14:47.078037 kernel: random: crng init done Feb 13 20:14:47.078051 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:14:47.078065 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 20:14:47.078079 kernel: Fallback order for Node 0: 0 Feb 13 20:14:47.078093 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Feb 13 20:14:47.078105 kernel: Policy zone: DMA32 Feb 13 20:14:47.078123 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:14:47.078140 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125148K reserved, 0K cma-reserved) Feb 13 20:14:47.078154 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 20:14:47.078187 kernel: Kernel/User page tables isolation: enabled Feb 13 20:14:47.078214 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:14:47.078240 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:14:47.078266 kernel: Dynamic Preempt: voluntary Feb 13 20:14:47.078286 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:14:47.078309 kernel: rcu: RCU event tracing is enabled. Feb 13 20:14:47.078331 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 20:14:47.078352 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:14:47.078373 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:14:47.078404 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:14:47.078426 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:14:47.078447 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 20:14:47.078468 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 20:14:47.078491 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 20:14:47.078513 kernel: Console: colour VGA+ 80x25 Feb 13 20:14:47.078528 kernel: printk: console [tty0] enabled Feb 13 20:14:47.078546 kernel: printk: console [ttyS0] enabled Feb 13 20:14:47.078564 kernel: ACPI: Core revision 20230628 Feb 13 20:14:47.078592 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 20:14:47.078612 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:14:47.078631 kernel: x2apic enabled Feb 13 20:14:47.078651 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:14:47.078670 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 20:14:47.078691 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns Feb 13 20:14:47.078707 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294606) Feb 13 20:14:47.078720 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 13 20:14:47.079787 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 13 20:14:47.079870 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:14:47.079885 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 20:14:47.079900 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:14:47.079918 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:14:47.079932 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 13 20:14:47.079947 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 20:14:47.079961 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 20:14:47.079976 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 20:14:47.079991 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:14:47.080017 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:14:47.080032 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:14:47.080048 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:14:47.080066 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:14:47.080083 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 20:14:47.080098 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:14:47.080111 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:14:47.080131 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:14:47.080146 kernel: landlock: Up and running. Feb 13 20:14:47.080161 kernel: SELinux: Initializing. Feb 13 20:14:47.080174 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:14:47.080188 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:14:47.080201 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Feb 13 20:14:47.080215 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:14:47.080232 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:14:47.080246 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:14:47.080266 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Feb 13 20:14:47.080283 kernel: signal: max sigframe size: 1776 Feb 13 20:14:47.080299 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:14:47.080316 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:14:47.080331 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 20:14:47.080345 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:14:47.080359 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:14:47.080373 kernel: .... node #0, CPUs: #1 Feb 13 20:14:47.080387 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:14:47.080419 kernel: smpboot: Max logical packages: 1 Feb 13 20:14:47.080439 kernel: smpboot: Total of 2 processors activated (9178.42 BogoMIPS) Feb 13 20:14:47.080458 kernel: devtmpfs: initialized Feb 13 20:14:47.080478 kernel: x86/mm: Memory block size: 128MB Feb 13 20:14:47.080497 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:14:47.080517 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 20:14:47.080537 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:14:47.080556 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:14:47.080572 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:14:47.080596 kernel: audit: type=2000 audit(1739477685.073:1): state=initialized audit_enabled=0 res=1 Feb 13 20:14:47.080611 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:14:47.080625 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:14:47.080640 kernel: cpuidle: using governor menu Feb 13 20:14:47.080654 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:14:47.080670 kernel: dca service started, version 1.12.1 Feb 13 20:14:47.080683 kernel: PCI: Using configuration type 1 for base access Feb 13 20:14:47.080698 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 20:14:47.080713 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:14:47.080752 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:14:47.080767 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:14:47.080783 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:14:47.080797 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:14:47.080810 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:14:47.080823 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:14:47.080844 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:14:47.080863 kernel: ACPI: Interpreter enabled Feb 13 20:14:47.080877 kernel: ACPI: PM: (supports S0 S5) Feb 13 20:14:47.080891 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:14:47.080912 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:14:47.080928 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:14:47.080943 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 13 20:14:47.080959 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:14:47.081342 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:14:47.081538 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 20:14:47.081684 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 20:14:47.081713 kernel: acpiphp: Slot [3] registered Feb 13 20:14:47.081730 kernel: acpiphp: Slot [4] registered Feb 13 20:14:47.084423 kernel: acpiphp: Slot [5] registered Feb 13 20:14:47.084447 kernel: acpiphp: Slot [6] registered Feb 13 20:14:47.084469 kernel: acpiphp: Slot [7] registered Feb 13 20:14:47.084490 kernel: acpiphp: Slot [8] registered Feb 13 20:14:47.084511 kernel: acpiphp: Slot [9] registered Feb 13 20:14:47.084532 kernel: acpiphp: Slot [10] registered Feb 13 20:14:47.084553 kernel: acpiphp: Slot [11] registered Feb 13 20:14:47.084587 kernel: acpiphp: Slot [12] registered Feb 13 20:14:47.084610 kernel: acpiphp: Slot [13] registered Feb 13 20:14:47.084630 kernel: acpiphp: Slot [14] registered Feb 13 20:14:47.084645 kernel: acpiphp: Slot [15] registered Feb 13 20:14:47.084659 kernel: acpiphp: Slot [16] registered Feb 13 20:14:47.084673 kernel: acpiphp: Slot [17] registered Feb 13 20:14:47.084691 kernel: acpiphp: Slot [18] registered Feb 13 20:14:47.084705 kernel: acpiphp: Slot [19] registered Feb 13 20:14:47.084718 kernel: acpiphp: Slot [20] registered Feb 13 20:14:47.084756 kernel: acpiphp: Slot [21] registered Feb 13 20:14:47.084776 kernel: acpiphp: Slot [22] registered Feb 13 20:14:47.084796 kernel: acpiphp: Slot [23] registered Feb 13 20:14:47.084817 kernel: acpiphp: Slot [24] registered Feb 13 20:14:47.084837 kernel: acpiphp: Slot [25] registered Feb 13 20:14:47.084857 kernel: acpiphp: Slot [26] registered Feb 13 20:14:47.084877 kernel: acpiphp: Slot [27] registered Feb 13 20:14:47.084895 kernel: acpiphp: Slot [28] registered Feb 13 20:14:47.084909 kernel: acpiphp: Slot [29] registered Feb 13 20:14:47.084922 kernel: acpiphp: Slot [30] registered Feb 13 20:14:47.084946 kernel: acpiphp: Slot [31] registered Feb 13 20:14:47.084963 kernel: PCI host bridge to bus 0000:00 Feb 13 20:14:47.085294 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:14:47.085465 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Feb 13 20:14:47.085624 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:14:47.087621 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 13 20:14:47.087869 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 13 20:14:47.088042 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:14:47.088261 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 20:14:47.088474 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 13 20:14:47.088665 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 13 20:14:47.093072 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Feb 13 20:14:47.093344 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 13 20:14:47.093518 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 13 20:14:47.093674 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 13 20:14:47.097027 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 13 20:14:47.097270 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Feb 13 20:14:47.097468 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Feb 13 20:14:47.097725 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 20:14:47.099102 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 13 20:14:47.099269 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 13 20:14:47.099432 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 13 20:14:47.099584 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 13 20:14:47.099748 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 13 20:14:47.100346 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Feb 13 20:14:47.100500 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 13 20:14:47.100649 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:14:47.101938 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 13 20:14:47.102069 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Feb 13 20:14:47.102172 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Feb 13 20:14:47.102343 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 13 20:14:47.102492 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 20:14:47.102598 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Feb 13 20:14:47.102709 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Feb 13 20:14:47.103983 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 13 20:14:47.104188 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Feb 13 20:14:47.104344 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Feb 13 20:14:47.104498 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Feb 13 20:14:47.105446 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 13 20:14:47.105698 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Feb 13 20:14:47.110033 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 20:14:47.110237 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Feb 13 20:14:47.110408 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 13 20:14:47.110610 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Feb 13 20:14:47.110822 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Feb 13 20:14:47.110978 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Feb 13 20:14:47.111144 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Feb 13 20:14:47.111347 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Feb 13 20:14:47.111501 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Feb 13 20:14:47.111651 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Feb 13 20:14:47.111675 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 20:14:47.111690 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 20:14:47.111705 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:14:47.111718 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 20:14:47.113894 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 20:14:47.113932 kernel: iommu: Default domain type: Translated Feb 13 20:14:47.113954 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:14:47.113976 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:14:47.113997 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:14:47.114018 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 20:14:47.114039 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Feb 13 20:14:47.114313 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 13 20:14:47.114472 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 13 20:14:47.114650 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:14:47.114669 kernel: vgaarb: loaded Feb 13 20:14:47.114684 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 20:14:47.114700 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 20:14:47.114724 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 20:14:47.114775 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:14:47.114799 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:14:47.114814 kernel: pnp: PnP ACPI init Feb 13 20:14:47.114828 kernel: pnp: PnP ACPI: found 4 devices Feb 13 20:14:47.114853 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:14:47.114868 kernel: NET: Registered PF_INET protocol family Feb 13 20:14:47.114882 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:14:47.114896 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 20:14:47.114912 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:14:47.114927 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:14:47.114946 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:14:47.114970 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 20:14:47.114996 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:14:47.115018 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:14:47.115039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:14:47.115060 kernel: NET: Registered PF_XDP protocol family Feb 13 20:14:47.115264 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:14:47.115373 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 
20:14:47.115491 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:14:47.115611 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 13 20:14:47.116802 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 13 20:14:47.117029 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 13 20:14:47.117306 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 20:14:47.117342 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 13 20:14:47.117517 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 44735 usecs Feb 13 20:14:47.117545 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:14:47.117567 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:14:47.117589 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns Feb 13 20:14:47.117610 kernel: Initialise system trusted keyrings Feb 13 20:14:47.117641 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 20:14:47.117662 kernel: Key type asymmetric registered Feb 13 20:14:47.117683 kernel: Asymmetric key parser 'x509' registered Feb 13 20:14:47.117705 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:14:47.117728 kernel: io scheduler mq-deadline registered Feb 13 20:14:47.117780 kernel: io scheduler kyber registered Feb 13 20:14:47.117800 kernel: io scheduler bfq registered Feb 13 20:14:47.117825 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:14:47.117847 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 13 20:14:47.117875 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 20:14:47.117897 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 20:14:47.117919 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:14:47.117941 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:14:47.117955 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:14:47.117970 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:14:47.117989 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:14:47.118016 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:14:47.118267 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 20:14:47.118451 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 20:14:47.118621 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:14:46 UTC (1739477686) Feb 13 20:14:47.121947 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 20:14:47.122005 kernel: intel_pstate: CPU model not supported Feb 13 20:14:47.122029 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:14:47.122050 kernel: Segment Routing with IPv6 Feb 13 20:14:47.122072 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:14:47.122093 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:14:47.122129 kernel: Key type dns_resolver registered Feb 13 20:14:47.122151 kernel: IPI shorthand broadcast: enabled Feb 13 20:14:47.122172 kernel: sched_clock: Marking stable (1398004760, 203110717)->(1669069183, -67953706) Feb 13 20:14:47.122193 kernel: registered taskstats version 1 Feb 13 20:14:47.122214 kernel: Loading compiled-in X.509 certificates Feb 13 20:14:47.122235 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:14:47.122261 kernel: Key type .fscrypt registered 
Feb 13 20:14:47.122277 kernel: Key type fscrypt-provisioning registered Feb 13 20:14:47.122291 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:14:47.122317 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:14:47.122344 kernel: ima: No architecture policies found Feb 13 20:14:47.122372 kernel: clk: Disabling unused clocks Feb 13 20:14:47.122400 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:14:47.122422 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:14:47.122480 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:14:47.122514 kernel: Run /init as init process Feb 13 20:14:47.122543 kernel: with arguments: Feb 13 20:14:47.122576 kernel: /init Feb 13 20:14:47.122605 kernel: with environment: Feb 13 20:14:47.122633 kernel: HOME=/ Feb 13 20:14:47.122661 kernel: TERM=linux Feb 13 20:14:47.122677 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:14:47.122698 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:14:47.122718 systemd[1]: Detected virtualization kvm. Feb 13 20:14:47.122760 systemd[1]: Detected architecture x86-64. Feb 13 20:14:47.122795 systemd[1]: Running in initrd. Feb 13 20:14:47.122815 systemd[1]: No hostname configured, using default hostname. Feb 13 20:14:47.122831 systemd[1]: Hostname set to . Feb 13 20:14:47.122857 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:14:47.122887 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:14:47.122917 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:14:47.122945 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:14:47.122964 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:14:47.122997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:14:47.123028 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:14:47.123053 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:14:47.123074 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:14:47.123106 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:14:47.123137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:14:47.123167 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:14:47.123201 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:14:47.125352 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:14:47.125394 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:14:47.125429 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:14:47.125454 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:14:47.125481 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Feb 13 20:14:47.125506 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:14:47.125530 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:14:47.125554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:14:47.125583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:14:47.125600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:14:47.125616 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:14:47.125635 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:14:47.125661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:14:47.125689 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:14:47.125713 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:14:47.126838 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:14:47.126872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:14:47.126888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:14:47.126959 systemd-journald[184]: Collecting audit messages is disabled. Feb 13 20:14:47.127021 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:14:47.127038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:14:47.127060 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:14:47.127089 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:14:47.127119 systemd-journald[184]: Journal started Feb 13 20:14:47.127168 systemd-journald[184]: Runtime Journal (/run/log/journal/b135a433ac4f47e4bc66cf7d60cc4807) is 4.9M, max 39.3M, 34.4M free. Feb 13 20:14:47.095731 systemd-modules-load[185]: Inserted module 'overlay' Feb 13 20:14:47.130265 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:14:47.145769 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:14:47.147938 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 13 20:14:47.188548 kernel: Bridge firewalling registered Feb 13 20:14:47.188752 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:14:47.195773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:47.196880 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:14:47.205119 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:14:47.209092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:14:47.218054 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:14:47.220989 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:14:47.242338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:14:47.250962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:14:47.259253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 20:14:47.269363 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:14:47.271731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:14:47.289111 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:14:47.304078 dracut-cmdline[216]: dracut-dracut-053 Feb 13 20:14:47.310006 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:14:47.354289 systemd-resolved[219]: Positive Trust Anchors: Feb 13 20:14:47.354318 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:14:47.354380 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:14:47.364803 systemd-resolved[219]: Defaulting to hostname 'linux'. Feb 13 20:14:47.368694 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:14:47.369682 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:14:47.445820 kernel: SCSI subsystem initialized Feb 13 20:14:47.461793 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:14:47.478784 kernel: iscsi: registered transport (tcp) Feb 13 20:14:47.512922 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:14:47.513067 kernel: QLogic iSCSI HBA Driver Feb 13 20:14:47.578601 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:14:47.585092 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:14:47.636201 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:14:47.636331 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:14:47.636355 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:14:47.691827 kernel: raid6: avx2x4 gen() 15479 MB/s Feb 13 20:14:47.707820 kernel: raid6: avx2x2 gen() 14903 MB/s Feb 13 20:14:47.726182 kernel: raid6: avx2x1 gen() 12369 MB/s Feb 13 20:14:47.726287 kernel: raid6: using algorithm avx2x4 gen() 15479 MB/s Feb 13 20:14:47.744839 kernel: raid6: .... xor() 6760 MB/s, rmw enabled Feb 13 20:14:47.744968 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:14:47.770777 kernel: xor: automatically using best checksumming function avx Feb 13 20:14:47.977820 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:14:47.995944 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:14:48.004146 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 20:14:48.038926 systemd-udevd[402]: Using default interface naming scheme 'v255'. Feb 13 20:14:48.048524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:14:48.059322 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:14:48.083303 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Feb 13 20:14:48.136621 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:14:48.144097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:14:48.230645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:14:48.239899 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:14:48.278179 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:14:48.279703 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:14:48.281665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:14:48.283578 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:14:48.292055 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:14:48.310765 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Feb 13 20:14:48.370449 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 13 20:14:48.370609 kernel: scsi host0: Virtio SCSI HBA Feb 13 20:14:48.370757 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:14:48.370787 kernel: GPT:9289727 != 125829119 Feb 13 20:14:48.370799 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:14:48.370812 kernel: GPT:9289727 != 125829119 Feb 13 20:14:48.370824 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:14:48.370836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:14:48.370849 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:14:48.370862 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Feb 13 20:14:48.396588 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Feb 13 20:14:48.331783 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:14:48.401808 kernel: ACPI: bus type USB registered Feb 13 20:14:48.404772 kernel: usbcore: registered new interface driver usbfs Feb 13 20:14:48.410767 kernel: usbcore: registered new interface driver hub Feb 13 20:14:48.418760 kernel: usbcore: registered new device driver usb Feb 13 20:14:48.435947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:14:48.436200 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:14:48.437454 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:14:48.438086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:14:48.438439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:48.439149 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:14:48.452783 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:14:48.452666 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:14:48.463773 kernel: libata version 3.00 loaded. 
Feb 13 20:14:48.477338 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 13 20:14:48.528144 kernel: AES CTR mode by8 optimization enabled Feb 13 20:14:48.528168 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) Feb 13 20:14:48.528182 kernel: scsi host1: ata_piix Feb 13 20:14:48.528394 kernel: scsi host2: ata_piix Feb 13 20:14:48.528591 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Feb 13 20:14:48.528606 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Feb 13 20:14:48.531450 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 20:14:48.611598 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (463) Feb 13 20:14:48.611657 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Feb 13 20:14:48.612087 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Feb 13 20:14:48.612353 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Feb 13 20:14:48.612595 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Feb 13 20:14:48.614864 kernel: hub 1-0:1.0: USB hub found Feb 13 20:14:48.615214 kernel: hub 1-0:1.0: 2 ports detected Feb 13 20:14:48.620795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:48.633872 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 20:14:48.641932 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:14:48.647457 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 20:14:48.648340 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 20:14:48.662143 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:14:48.668045 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:14:48.687314 disk-uuid[532]: Primary Header is updated. Feb 13 20:14:48.687314 disk-uuid[532]: Secondary Entries is updated. Feb 13 20:14:48.687314 disk-uuid[532]: Secondary Header is updated. Feb 13 20:14:48.707776 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:14:48.712501 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:14:48.719797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:14:49.742810 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:14:49.743945 disk-uuid[535]: The operation has completed successfully. Feb 13 20:14:49.814587 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:14:49.815878 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:14:49.844234 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:14:49.848860 sh[564]: Success Feb 13 20:14:49.871047 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:14:49.996975 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:14:50.013838 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:14:50.015818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 20:14:50.058045 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:14:50.058182 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:14:50.059903 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:14:50.061968 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:14:50.064411 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:14:50.078351 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:14:50.080780 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:14:50.087072 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:14:50.091042 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:14:50.115799 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:14:50.115909 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:14:50.119849 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:14:50.126828 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:14:50.146114 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:14:50.147490 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:14:50.162040 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:14:50.167091 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:14:50.275342 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:14:50.293168 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:14:50.351234 systemd-networkd[750]: lo: Link UP Feb 13 20:14:50.351250 systemd-networkd[750]: lo: Gained carrier Feb 13 20:14:50.356620 systemd-networkd[750]: Enumeration completed Feb 13 20:14:50.356814 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:14:50.358003 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Feb 13 20:14:50.358009 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Feb 13 20:14:50.360929 systemd[1]: Reached target network.target - Network. Feb 13 20:14:50.360981 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:14:50.360987 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:14:50.363621 systemd-networkd[750]: eth0: Link UP Feb 13 20:14:50.363630 systemd-networkd[750]: eth0: Gained carrier Feb 13 20:14:50.363648 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Feb 13 20:14:50.369370 systemd-networkd[750]: eth1: Link UP Feb 13 20:14:50.369374 systemd-networkd[750]: eth1: Gained carrier Feb 13 20:14:50.369391 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 20:14:50.383916 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.14/20 acquired from 169.254.169.253 Feb 13 20:14:50.384228 ignition[661]: Ignition 2.19.0 Feb 13 20:14:50.384237 ignition[661]: Stage: fetch-offline Feb 13 20:14:50.384302 ignition[661]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:50.385478 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:50.385659 ignition[661]: parsed url from cmdline: "" Feb 13 20:14:50.388357 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:14:50.385665 ignition[661]: no config URL provided Feb 13 20:14:50.390238 systemd-networkd[750]: eth0: DHCPv4 address 64.23.201.9/19, gateway 64.23.192.1 acquired from 169.254.169.253 Feb 13 20:14:50.385674 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:14:50.385691 ignition[661]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:14:50.385700 ignition[661]: failed to fetch config: resource requires networking Feb 13 20:14:50.386005 ignition[661]: Ignition finished successfully Feb 13 20:14:50.400116 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:14:50.424638 ignition[759]: Ignition 2.19.0 Feb 13 20:14:50.424654 ignition[759]: Stage: fetch Feb 13 20:14:50.424940 ignition[759]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:50.424957 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:50.425158 ignition[759]: parsed url from cmdline: "" Feb 13 20:14:50.425163 ignition[759]: no config URL provided Feb 13 20:14:50.425171 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:14:50.425183 ignition[759]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:14:50.425208 ignition[759]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Feb 13 20:14:50.441525 ignition[759]: GET result: OK Feb 13 20:14:50.442625 ignition[759]: parsing config with SHA512: 75327b7a77a18bb49beb3f9b6d5c2d3288f38f489231a94bfb3494d769397c7c3d6979d6791fa2fc528a53b0451273c456e12afcb7aca221b87c23674f1b9795 Feb 13 20:14:50.451819 unknown[759]: fetched base config from "system" Feb 13 20:14:50.451842 unknown[759]: fetched base config from "system" Feb 13 20:14:50.452594 ignition[759]: fetch: fetch complete Feb 13 20:14:50.451890 unknown[759]: fetched user config from "digitalocean" Feb 13 20:14:50.452602 ignition[759]: fetch: fetch passed Feb 13 20:14:50.456558 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:14:50.452733 ignition[759]: Ignition finished successfully Feb 13 20:14:50.464066 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:14:50.491771 ignition[766]: Ignition 2.19.0 Feb 13 20:14:50.491781 ignition[766]: Stage: kargs Feb 13 20:14:50.492088 ignition[766]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:50.492107 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:50.494073 ignition[766]: kargs: kargs passed Feb 13 20:14:50.495952 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:14:50.494157 ignition[766]: Ignition finished successfully Feb 13 20:14:50.503084 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 20:14:50.543150 ignition[772]: Ignition 2.19.0 Feb 13 20:14:50.543168 ignition[772]: Stage: disks Feb 13 20:14:50.543491 ignition[772]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:50.543505 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:50.545498 ignition[772]: disks: disks passed Feb 13 20:14:50.545571 ignition[772]: Ignition finished successfully Feb 13 20:14:50.548145 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:14:50.555390 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:14:50.556856 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:14:50.558251 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:14:50.559504 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:14:50.560622 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:14:50.569189 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:14:50.605385 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:14:50.614239 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:14:50.620053 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:14:50.763771 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:14:50.765157 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:14:50.766575 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:14:50.773960 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:14:50.788114 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:14:50.792181 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Feb 13 20:14:50.803782 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (788) Feb 13 20:14:50.809379 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:14:50.809462 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:14:50.809338 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:14:50.813549 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:14:50.815794 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:14:50.815864 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:14:50.820644 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:14:50.824536 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:14:50.830635 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:14:50.841138 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 20:14:50.949999 coreos-metadata[790]: Feb 13 20:14:50.949 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:14:50.956342 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:14:50.965633 coreos-metadata[791]: Feb 13 20:14:50.964 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:14:50.967067 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:14:50.968585 coreos-metadata[790]: Feb 13 20:14:50.968 INFO Fetch successful Feb 13 20:14:50.977904 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:14:50.982933 coreos-metadata[791]: Feb 13 20:14:50.982 INFO Fetch successful Feb 13 20:14:50.989431 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:14:50.990142 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Feb 13 20:14:50.990296 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Feb 13 20:14:50.996902 coreos-metadata[791]: Feb 13 20:14:50.996 INFO wrote hostname ci-4081.3.1-e-9d3732dae3 to /sysroot/etc/hostname Feb 13 20:14:50.999199 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:14:51.132761 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:14:51.141084 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:14:51.146048 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:14:51.160552 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:14:51.164114 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:14:51.204312 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:14:51.214040 ignition[910]: INFO : Ignition 2.19.0 Feb 13 20:14:51.216764 ignition[910]: INFO : Stage: mount Feb 13 20:14:51.216764 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:51.216764 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:51.216764 ignition[910]: INFO : mount: mount passed Feb 13 20:14:51.216764 ignition[910]: INFO : Ignition finished successfully Feb 13 20:14:51.220365 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:14:51.227009 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:14:51.257114 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:14:51.271817 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921) Feb 13 20:14:51.278836 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:14:51.278943 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:14:51.280874 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:14:51.287840 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:14:51.292082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
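The Flatcar Metadata Hostname Agent above fetches the droplet's metadata JSON and writes the hostname into /sysroot/etc/hostname. A simplified stand-in for that step is sketched below; the real agent is a compiled binary, and the `hostname` field name is an assumption about the metadata v1 payload.

```python
# Minimal sketch (not the real coreos-metadata agent): fetch the droplet
# metadata document and persist its hostname under the sysroot.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # endpoint from the log

def write_hostname(sysroot: str = "/sysroot") -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=5.0) as resp:
        metadata = json.load(resp)
    hostname = metadata["hostname"]          # assumed field name in the v1 payload
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname", write_hostname(), "to /sysroot/etc/hostname")
```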
Feb 13 20:14:51.328125 ignition[938]: INFO : Ignition 2.19.0 Feb 13 20:14:51.329338 ignition[938]: INFO : Stage: files Feb 13 20:14:51.329338 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:51.329338 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:51.332150 ignition[938]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:14:51.333595 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:14:51.333595 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:14:51.339833 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:14:51.341276 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:14:51.342510 unknown[938]: wrote ssh authorized keys file for user: core Feb 13 20:14:51.343662 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:14:51.344904 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:14:51.346361 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:14:51.389109 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:14:51.620230 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:14:51.621647 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:14:51.621647 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:14:51.621647 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:14:51.621647 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:14:51.621647 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:14:51.621647 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:14:51.621647 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:14:51.629273 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:14:51.629273 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:14:51.629273 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:14:51.629273 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:14:51.629273 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:14:51.629273 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:14:51.629273 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 20:14:51.670973 systemd-networkd[750]: eth0: Gained IPv6LL Feb 13 20:14:51.991145 systemd-networkd[750]: eth1: Gained IPv6LL Feb 13 20:14:52.114848 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:14:52.424630 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 20:14:52.424630 ignition[938]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:14:52.428939 ignition[938]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:14:52.428939 ignition[938]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:14:52.428939 ignition[938]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:14:52.428939 ignition[938]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:14:52.428939 ignition[938]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:14:52.435993 ignition[938]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:14:52.435993 ignition[938]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:14:52.435993 ignition[938]: INFO : files: files passed Feb 13 20:14:52.435993 ignition[938]: INFO : Ignition finished successfully Feb 13 20:14:52.433125 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:14:52.441156 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:14:52.447946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:14:52.453109 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:14:52.454059 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:14:52.479252 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:14:52.479252 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:14:52.482971 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:14:52.485704 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:14:52.488378 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:14:52.499099 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:14:52.549914 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Feb 13 20:14:52.550102 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:14:52.551822 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:14:52.553267 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:14:52.554499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:14:52.560104 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:14:52.589086 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:14:52.594112 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:14:52.616700 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:14:52.617810 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:14:52.619285 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:14:52.620519 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:14:52.620770 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:14:52.622408 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:14:52.624062 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:14:52.625214 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:14:52.626549 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:14:52.628067 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:14:52.629410 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:14:52.630806 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:14:52.632397 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:14:52.633792 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:14:52.635087 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:14:52.636208 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:14:52.636433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:14:52.638149 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:14:52.639830 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:14:52.641118 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:14:52.641305 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:14:52.642644 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:14:52.643011 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:14:52.644485 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:14:52.644873 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:14:52.646293 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:14:52.646538 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:14:52.647408 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:14:52.647624 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Feb 13 20:14:52.655223 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:14:52.659989 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:14:52.660770 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:14:52.661116 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:14:52.664142 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:14:52.664392 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:14:52.678411 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:14:52.679623 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:14:52.697776 ignition[991]: INFO : Ignition 2.19.0 Feb 13 20:14:52.697776 ignition[991]: INFO : Stage: umount Feb 13 20:14:52.697776 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:14:52.697776 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 20:14:52.702856 ignition[991]: INFO : umount: umount passed Feb 13 20:14:52.702856 ignition[991]: INFO : Ignition finished successfully Feb 13 20:14:52.706327 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:14:52.706513 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:14:52.710372 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:14:52.710532 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:14:52.714997 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:14:52.715105 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:14:52.726174 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:14:52.726282 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:14:52.730530 systemd[1]: Stopped target network.target - Network. Feb 13 20:14:52.733813 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:14:52.733955 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:14:52.754368 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:14:52.754985 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:14:52.759204 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:14:52.771895 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:14:52.775480 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:14:52.776569 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:14:52.776648 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:14:52.778634 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:14:52.778702 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:14:52.779603 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:14:52.779681 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:14:52.780657 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:14:52.780718 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:14:52.782244 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:14:52.784003 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Feb 13 20:14:52.787458 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:14:52.787814 systemd-networkd[750]: eth0: DHCPv6 lease lost Feb 13 20:14:52.790537 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:14:52.790697 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:14:52.792408 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:14:52.792571 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:14:52.794853 systemd-networkd[750]: eth1: DHCPv6 lease lost Feb 13 20:14:52.794999 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:14:52.795180 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:14:52.798997 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:14:52.801251 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:14:52.804759 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:14:52.804856 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:14:52.813046 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:14:52.813817 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:14:52.813966 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:14:52.815365 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:14:52.815484 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:14:52.818683 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:14:52.818827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:14:52.820329 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:14:52.820425 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:14:52.827095 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:14:52.846425 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:14:52.847783 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:14:52.850572 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:14:52.850748 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:14:52.853560 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:14:52.853658 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:14:52.855417 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:14:52.855499 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:14:52.856571 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:14:52.856653 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:14:52.858453 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:14:52.858550 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:14:52.859997 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:14:52.860093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:14:52.873227 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Feb 13 20:14:52.875148 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:14:52.875266 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:14:52.875957 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:14:52.876022 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:14:52.876720 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:14:52.879163 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:14:52.880094 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:14:52.880156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:52.885236 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:14:52.885414 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:14:52.887527 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:14:52.901167 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:14:52.916082 systemd[1]: Switching root. Feb 13 20:14:52.966080 systemd-journald[184]: Journal stopped Feb 13 20:14:54.691937 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 13 20:14:54.692047 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:14:54.692073 kernel: SELinux: policy capability open_perms=1 Feb 13 20:14:54.692086 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:14:54.692105 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:14:54.692117 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:14:54.692129 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:14:54.692141 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:14:54.692160 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:14:54.692173 kernel: audit: type=1403 audit(1739477693.200:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:14:54.692193 systemd[1]: Successfully loaded SELinux policy in 56.922ms. Feb 13 20:14:54.692222 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.707ms. Feb 13 20:14:54.692238 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:14:54.692261 systemd[1]: Detected virtualization kvm. Feb 13 20:14:54.692275 systemd[1]: Detected architecture x86-64. Feb 13 20:14:54.692288 systemd[1]: Detected first boot. Feb 13 20:14:54.692302 systemd[1]: Hostname set to . Feb 13 20:14:54.692316 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:14:54.692335 zram_generator::config[1034]: No configuration found. Feb 13 20:14:54.692350 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:14:54.692368 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:14:54.692381 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:14:54.692395 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Feb 13 20:14:54.692410 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:14:54.692425 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:14:54.692439 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:14:54.692453 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:14:54.692467 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:14:54.692480 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:14:54.692497 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:14:54.692511 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:14:54.692525 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:14:54.692538 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:14:54.692553 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:14:54.692566 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:14:54.692581 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:14:54.692595 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:14:54.692608 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:14:54.692626 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:14:54.692639 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:14:54.692653 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:14:54.692667 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:14:54.692681 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:14:54.692695 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:14:54.692718 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:14:54.692732 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:14:54.698099 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:14:54.698122 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:14:54.698137 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:14:54.698152 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:14:54.698167 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:14:54.698181 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:14:54.698194 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:14:54.698221 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:14:54.698235 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:14:54.698249 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:14:54.698264 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 20:14:54.698279 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:14:54.698293 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:14:54.698325 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:14:54.698353 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:14:54.698379 systemd[1]: Reached target machines.target - Containers. Feb 13 20:14:54.698400 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:14:54.698416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:54.698430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:14:54.698445 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:14:54.698460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:14:54.698473 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:14:54.698487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:14:54.698501 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:14:54.698521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:14:54.698534 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:14:54.698548 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:14:54.698562 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:14:54.698575 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:14:54.698589 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:14:54.698602 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:14:54.698615 kernel: loop: module loaded Feb 13 20:14:54.698631 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:14:54.698648 kernel: fuse: init (API version 7.39) Feb 13 20:14:54.698661 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:14:54.698675 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:14:54.698688 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:14:54.698702 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:14:54.698715 systemd[1]: Stopped verity-setup.service. Feb 13 20:14:54.698729 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:54.699867 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:14:54.699894 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:14:54.699917 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:14:54.699932 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:14:54.699945 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Feb 13 20:14:54.699974 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:14:54.699993 kernel: ACPI: bus type drm_connector registered Feb 13 20:14:54.700056 systemd-journald[1107]: Collecting audit messages is disabled. Feb 13 20:14:54.700089 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:14:54.700108 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:14:54.700121 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:14:54.700149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:14:54.700164 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:14:54.700178 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:14:54.700192 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:14:54.700207 systemd-journald[1107]: Journal started Feb 13 20:14:54.700243 systemd-journald[1107]: Runtime Journal (/run/log/journal/b135a433ac4f47e4bc66cf7d60cc4807) is 4.9M, max 39.3M, 34.4M free. Feb 13 20:14:54.238321 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:14:54.261989 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:14:54.703449 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:14:54.262646 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:14:54.704503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:14:54.705985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:14:54.707624 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:14:54.708882 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:14:54.709967 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:14:54.710935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:14:54.712760 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:14:54.715283 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:14:54.716360 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:14:54.749475 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:14:54.768422 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:14:54.778951 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:14:54.780462 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:14:54.780530 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:14:54.785017 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:14:54.795082 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:14:54.805271 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:14:54.807104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:54.815131 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 20:14:54.832675 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:14:54.833936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:14:54.845119 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:14:54.846046 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:14:54.854087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:14:54.862083 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:14:54.868112 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:14:54.874862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:14:54.877076 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:14:54.879195 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:14:54.881096 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:14:54.890889 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:14:54.909094 kernel: loop0: detected capacity change from 0 to 140768 Feb 13 20:14:54.919051 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:14:54.930529 systemd-journald[1107]: Time spent on flushing to /var/log/journal/b135a433ac4f47e4bc66cf7d60cc4807 is 56.135ms for 991 entries. Feb 13 20:14:54.930529 systemd-journald[1107]: System Journal (/var/log/journal/b135a433ac4f47e4bc66cf7d60cc4807) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:14:55.024711 systemd-journald[1107]: Received client request to flush runtime journal. Feb 13 20:14:55.024845 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:14:55.024944 kernel: loop1: detected capacity change from 0 to 205544 Feb 13 20:14:54.969338 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:14:54.975002 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:14:54.989231 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:14:54.994893 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:14:55.035834 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:14:55.072204 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:14:55.078501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:14:55.081895 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:14:55.083183 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Feb 13 20:14:55.083204 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Feb 13 20:14:55.102730 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:14:55.108074 kernel: loop2: detected capacity change from 0 to 8 Feb 13 20:14:55.118154 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
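The journald flush line above gives 56.135 ms for 991 entries; converted to per-entry cost and sustained throughput:

```python
# Per-entry cost and rate implied by the journald flush timing in the log.
flush_ms, entries = 56.135, 991

per_entry_us = flush_ms * 1000 / entries
rate_per_s = entries / (flush_ms / 1000)
print(f"~{per_entry_us:.1f} µs per entry, ~{rate_per_s:,.0f} entries/s sustained")
```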
Feb 13 20:14:55.148779 kernel: loop3: detected capacity change from 0 to 142488 Feb 13 20:14:55.242778 kernel: loop4: detected capacity change from 0 to 140768 Feb 13 20:14:55.257845 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:14:55.274469 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:14:55.285852 kernel: loop5: detected capacity change from 0 to 205544 Feb 13 20:14:55.310927 kernel: loop6: detected capacity change from 0 to 8 Feb 13 20:14:55.329915 kernel: loop7: detected capacity change from 0 to 142488 Feb 13 20:14:55.354396 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Feb 13 20:14:55.355260 (sd-merge)[1178]: Merged extensions into '/usr'. Feb 13 20:14:55.373482 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:14:55.373510 systemd[1]: Reloading... Feb 13 20:14:55.391628 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Feb 13 20:14:55.394158 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Feb 13 20:14:55.516765 zram_generator::config[1205]: No configuration found. Feb 13 20:14:55.813259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:14:55.872777 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:14:55.942832 systemd[1]: Reloading finished in 568 ms. Feb 13 20:14:55.970149 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:14:55.972115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:14:55.974233 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:14:55.991236 systemd[1]: Starting ensure-sysext.service... Feb 13 20:14:56.003265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:14:56.027984 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:14:56.028024 systemd[1]: Reloading... Feb 13 20:14:56.059291 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:14:56.061950 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:14:56.066043 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:14:56.066491 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Feb 13 20:14:56.066602 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Feb 13 20:14:56.080930 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:14:56.080947 systemd-tmpfiles[1253]: Skipping /boot Feb 13 20:14:56.109710 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:14:56.109731 systemd-tmpfiles[1253]: Skipping /boot Feb 13 20:14:56.248766 zram_generator::config[1286]: No configuration found. 
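The systemd-tmpfiles "Duplicate line for path ... ignoring" warnings above come from the same path appearing in more than one tmpfiles.d line. A rough stand-alone checker in the same spirit is sketched below; it ignores drop-in precedence and tmpfiles' own override semantics, so treat it as a diagnostic aid only.

```python
# Simplified scanner for duplicate path entries across tmpfiles.d fragments.
from pathlib import Path

def find_duplicate_tmpfiles_paths(root: str = "/usr/lib/tmpfiles.d") -> None:
    seen: dict[str, str] = {}
    for conf in sorted(Path(root).glob("*.conf")):
        for lineno, line in enumerate(conf.read_text().splitlines(), start=1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) < 2:
                continue                      # not a "type path ..." line
            path, where = fields[1], f"{conf}:{lineno}"
            if path in seen:
                print(f"{where}: duplicate line for path {path!r} "
                      f"(first seen at {seen[path]})")
            else:
                seen[path] = where

if __name__ == "__main__":
    find_duplicate_tmpfiles_paths()
```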
Feb 13 20:14:56.449011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:14:56.540833 systemd[1]: Reloading finished in 511 ms. Feb 13 20:14:56.564585 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:14:56.587110 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:14:56.596234 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:14:56.604076 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:14:56.617112 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:14:56.627123 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:14:56.637082 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:56.637400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:56.641190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:14:56.652333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:14:56.655975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:14:56.657342 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:56.657549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:56.662915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:56.663237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:56.663522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:56.663693 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:56.668656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:56.669083 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:56.675215 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:14:56.676204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:56.685255 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:14:56.687932 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:56.693662 systemd[1]: Finished ensure-sysext.service. Feb 13 20:14:56.703334 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Feb 13 20:14:56.711010 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:14:56.734207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:14:56.734466 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:14:56.750520 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:14:56.752890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:14:56.754235 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:14:56.757374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:14:56.757963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:14:56.761198 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:14:56.801936 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:14:56.803319 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:14:56.804526 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:14:56.805061 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:14:56.818185 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:14:56.824043 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:14:56.835868 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:14:56.838198 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:14:56.843997 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:14:56.853200 augenrules[1364]: No rules Feb 13 20:14:56.853901 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:14:56.887795 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:14:56.940106 systemd-resolved[1328]: Positive Trust Anchors: Feb 13 20:14:56.940128 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:14:56.940223 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:14:56.943288 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Feb 13 20:14:56.947540 systemd-resolved[1328]: Using system hostname 'ci-4081.3.1-e-9d3732dae3'. Feb 13 20:14:56.950104 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:14:56.950920 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:14:56.995621 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
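The "Positive Trust Anchors" entry above is the root zone's DNSSEC DS record built into systemd-resolved. Splitting it into named fields (algorithm and digest-type names taken from the IANA registries):

```python
# Parse the DS record that resolved logs as its positive trust anchor.
DS_LINE = (". IN DS 20326 8 2 "
           "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

ALGORITHMS = {8: "RSA/SHA-256", 13: "ECDSA P-256/SHA-256"}
DIGEST_TYPES = {1: "SHA-1", 2: "SHA-256", 4: "SHA-384"}

def parse_ds(line: str) -> dict:
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = line.split()
    return {
        "owner": owner,
        "key_tag": int(key_tag),
        "algorithm": ALGORITHMS.get(int(alg), f"unknown ({alg})"),
        "digest_type": DIGEST_TYPES.get(int(digest_type), f"unknown ({digest_type})"),
        "digest": digest,
    }

if __name__ == "__main__":
    for field, value in parse_ds(DS_LINE).items():
        print(f"{field:>12}: {value}")
```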
Feb 13 20:14:56.996426 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:14:57.001924 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:14:57.012128 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:14:57.146727 systemd-networkd[1378]: lo: Link UP Feb 13 20:14:57.146771 systemd-networkd[1378]: lo: Gained carrier Feb 13 20:14:57.149269 systemd-networkd[1378]: Enumeration completed Feb 13 20:14:57.149420 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:14:57.150222 systemd[1]: Reached target network.target - Network. Feb 13 20:14:57.159076 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:14:57.161997 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:14:57.199809 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1379) Feb 13 20:14:57.221841 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 20:14:57.222676 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:57.222988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:14:57.225442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:14:57.240086 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:14:57.245562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:14:57.246949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:14:57.247018 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:14:57.247042 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:14:57.247648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:14:57.248875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:14:57.250592 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:14:57.275550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:14:57.277785 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 20:14:57.280173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:14:57.286376 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Feb 13 20:14:57.287976 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:14:57.288343 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:14:57.293794 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:14:57.329676 systemd-networkd[1378]: eth1: Configuring with /run/systemd/network/10-66:c2:b7:8c:05:5a.network. 
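systemd-networkd above picks up per-interface units named 10-&lt;MAC&gt;.network from /run/systemd/network, one per NIC. The sketch below generates such a unit name plus a minimal DHCP [Match]/[Network] body; the body is an assumed example, not the exact content written for this droplet.

```python
# Generate a per-MAC .network unit name and an assumed minimal DHCP body,
# mirroring the 10-<mac>.network paths seen in the surrounding log.
def network_unit(mac: str, dhcp: str = "yes") -> tuple[str, str]:
    name = f"10-{mac.lower()}.network"
    body = (
        "[Match]\n"
        f"MACAddress={mac.lower()}\n"
        "\n"
        "[Network]\n"
        f"DHCP={dhcp}\n"
    )
    return name, body

if __name__ == "__main__":
    # MACs taken from the eth0/eth1 unit names in the log
    for mac in ("22:f4:7e:68:4d:eb", "66:c2:b7:8c:05:5a"):
        name, body = network_unit(mac)
        print(f"# /run/systemd/network/{name}")
        print(body)
```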
Feb 13 20:14:57.331887 systemd-networkd[1378]: eth1: Link UP Feb 13 20:14:57.331898 systemd-networkd[1378]: eth1: Gained carrier Feb 13 20:14:57.339805 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Feb 13 20:14:57.345052 systemd-networkd[1378]: eth0: Configuring with /run/systemd/network/10-22:f4:7e:68:4d:eb.network. Feb 13 20:14:57.345494 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Feb 13 20:14:57.346780 systemd-networkd[1378]: eth0: Link UP Feb 13 20:14:57.346792 systemd-networkd[1378]: eth0: Gained carrier Feb 13 20:14:57.349496 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Feb 13 20:14:57.353405 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Feb 13 20:14:57.366781 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:14:57.376030 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:14:57.383783 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 20:14:57.392673 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:14:57.386430 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:14:57.400824 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:14:57.420668 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:14:57.470788 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 20:14:57.483779 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 20:14:57.496774 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:14:57.498375 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 20:14:57.498478 kernel: [drm] features: -context_init Feb 13 20:14:57.500104 kernel: [drm] number of scanouts: 1 Feb 13 20:14:57.500170 kernel: [drm] number of cap sets: 0 Feb 13 20:14:57.505799 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 20:14:57.509333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:14:57.516790 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:14:57.531008 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 20:14:57.531113 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:14:57.547944 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 20:14:57.560998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:14:57.561265 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:57.571068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:14:57.586316 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:14:57.586632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:57.603526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:14:57.747859 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:14:57.778200 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:14:57.785135 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Feb 13 20:14:57.799105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:14:57.816535 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:14:57.860152 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:14:57.861685 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:14:57.861965 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:14:57.862346 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:14:57.862574 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:14:57.863841 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:14:57.865815 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:14:57.865984 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:14:57.866076 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:14:57.866112 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:14:57.866184 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:14:57.868484 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:14:57.871086 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:14:57.885490 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:14:57.894067 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:14:57.897313 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:14:57.899205 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:14:57.901403 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:14:57.902122 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:14:57.902173 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:14:57.911121 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:14:57.911278 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:14:57.925885 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:14:57.931852 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:14:57.940015 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:14:57.950041 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:14:57.950683 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:14:57.960163 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:14:57.967065 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:14:57.973521 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Feb 13 20:14:57.978769 coreos-metadata[1440]: Feb 13 20:14:57.975 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:14:57.982996 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:14:57.994402 jq[1442]: false Feb 13 20:14:57.997219 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:14:58.000074 coreos-metadata[1440]: Feb 13 20:14:57.997 INFO Fetch successful Feb 13 20:14:57.999484 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:14:58.001207 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:14:58.010258 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:14:58.020028 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:14:58.023217 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:14:58.036279 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:14:58.036580 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:14:58.048191 extend-filesystems[1445]: Found loop4 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found loop5 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found loop6 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found loop7 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found vda Feb 13 20:14:58.063962 extend-filesystems[1445]: Found vda1 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found vda2 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found vda3 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found usr Feb 13 20:14:58.063962 extend-filesystems[1445]: Found vda4 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found vda6 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found vda7 Feb 13 20:14:58.063962 extend-filesystems[1445]: Found vda9 Feb 13 20:14:58.063962 extend-filesystems[1445]: Checking size of /dev/vda9 Feb 13 20:14:58.168086 extend-filesystems[1445]: Resized partition /dev/vda9 Feb 13 20:14:58.077477 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:14:58.078186 dbus-daemon[1441]: [system] SELinux support is enabled Feb 13 20:14:58.192215 extend-filesystems[1479]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:14:58.083579 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:14:58.195536 update_engine[1452]: I20250213 20:14:58.189169 1452 main.cc:92] Flatcar Update Engine starting Feb 13 20:14:58.209002 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 20:14:58.090152 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:14:58.090396 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:14:58.214475 jq[1453]: true Feb 13 20:14:58.093382 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:14:58.236305 update_engine[1452]: I20250213 20:14:58.222074 1452 update_check_scheduler.cc:74] Next update check in 11m29s Feb 13 20:14:58.093501 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
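The coreos-metadata entries above fetch the droplet's metadata from the link-local service; the same request can be repeated by hand to see what the agent saw (the endpoint is taken directly from the log):

    curl -s http://169.254.169.254/metadata/v1.json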
Feb 13 20:14:58.236686 tar[1464]: linux-amd64/helm Feb 13 20:14:58.097719 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:14:58.099449 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 20:14:58.099530 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:14:58.138806 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:14:58.139254 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:14:58.163911 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:14:58.229622 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:14:58.246530 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:14:58.251080 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:14:58.254624 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:14:58.314806 jq[1481]: true Feb 13 20:14:58.342412 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1386) Feb 13 20:14:58.412037 systemd-logind[1451]: New seat seat0. Feb 13 20:14:58.422141 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:14:58.423150 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:14:58.424336 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:14:58.451204 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:14:58.451408 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:14:58.456153 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:14:58.476391 systemd[1]: Starting sshkeys.service... Feb 13 20:14:58.555126 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:14:58.566776 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:14:58.591514 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:14:58.591514 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:14:58.591514 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:14:58.603964 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Feb 13 20:14:58.603964 extend-filesystems[1445]: Found vdb Feb 13 20:14:58.601146 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:14:58.601495 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
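The extend-filesystems/resize2fs entries above record an online grow of the root filesystem from 553472 to 15121403 4k blocks. A rough sketch of the equivalent manual steps; growpart (from cloud-utils) is an assumption here, since the log only shows the resize2fs half:

    growpart /dev/vda 9    # extend partition 9 to the end of the disk
    resize2fs /dev/vda9    # grow the mounted ext4 filesystem online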
Feb 13 20:14:58.708890 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:14:58.721778 coreos-metadata[1510]: Feb 13 20:14:58.720 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:14:58.734321 coreos-metadata[1510]: Feb 13 20:14:58.732 INFO Fetch successful Feb 13 20:14:58.774273 unknown[1510]: wrote ssh authorized keys file for user: core Feb 13 20:14:58.838920 systemd-networkd[1378]: eth1: Gained IPv6LL Feb 13 20:14:58.841919 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Feb 13 20:14:58.842676 update-ssh-keys[1521]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:14:58.842052 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:14:58.849293 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:14:58.856481 systemd[1]: Finished sshkeys.service. Feb 13 20:14:58.865764 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:14:58.877277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:14:58.892188 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:14:58.901630 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:14:58.954829 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:14:58.977603 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:14:58.994580 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:14:59.005044 systemd[1]: Started sshd@0-64.23.201.9:22-147.75.109.163:33294.service - OpenSSH per-connection server daemon (147.75.109.163:33294). Feb 13 20:14:59.020928 containerd[1475]: time="2025-02-13T20:14:59.020028825Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:14:59.054431 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:14:59.054898 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:14:59.071924 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:14:59.124858 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:14:59.145141 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:14:59.158054 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:14:59.161432 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:14:59.176773 containerd[1475]: time="2025-02-13T20:14:59.175523738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:59.183544 containerd[1475]: time="2025-02-13T20:14:59.183484873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:59.184179 containerd[1475]: time="2025-02-13T20:14:59.184141170Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:14:59.184350 containerd[1475]: time="2025-02-13T20:14:59.184332161Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 20:14:59.185421 containerd[1475]: time="2025-02-13T20:14:59.185393158Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:14:59.185560 containerd[1475]: time="2025-02-13T20:14:59.185544799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:59.186152 containerd[1475]: time="2025-02-13T20:14:59.186107722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:59.186316 containerd[1475]: time="2025-02-13T20:14:59.186295800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:59.187751 containerd[1475]: time="2025-02-13T20:14:59.187685727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:59.187751 containerd[1475]: time="2025-02-13T20:14:59.187719604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:59.187925 containerd[1475]: time="2025-02-13T20:14:59.187903998Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:59.188022 containerd[1475]: time="2025-02-13T20:14:59.188007453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:59.188284 containerd[1475]: time="2025-02-13T20:14:59.188253917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:59.190126 containerd[1475]: time="2025-02-13T20:14:59.189462477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:14:59.190126 containerd[1475]: time="2025-02-13T20:14:59.189682954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:14:59.190126 containerd[1475]: time="2025-02-13T20:14:59.189713653Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:14:59.190126 containerd[1475]: time="2025-02-13T20:14:59.189892209Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:14:59.190126 containerd[1475]: time="2025-02-13T20:14:59.189957494Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:14:59.204182 containerd[1475]: time="2025-02-13T20:14:59.204123763Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:14:59.204401 containerd[1475]: time="2025-02-13T20:14:59.204386515Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:14:59.204494 containerd[1475]: time="2025-02-13T20:14:59.204479222Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.204872839Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.204920017Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205144705Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205535879Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205729122Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205786611Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205806654Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205844202Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205870652Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205889067Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205924518Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205946951Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205965326Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:14:59.207223 containerd[1475]: time="2025-02-13T20:14:59.205983645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206002634Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206042442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206064859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206082048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206100233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206117275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206136273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206157915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206181655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206206343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206228974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206246175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206264682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206282228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.208025 containerd[1475]: time="2025-02-13T20:14:59.206330628Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206363447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206381464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206401595Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206465506Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206499248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206516955Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206538983Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206555596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206599030Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206622426Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:14:59.210673 containerd[1475]: time="2025-02-13T20:14:59.206638748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:14:59.213813 containerd[1475]: time="2025-02-13T20:14:59.212265133Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:14:59.213813 containerd[1475]: time="2025-02-13T20:14:59.212445850Z" level=info msg="Connect containerd service" Feb 13 20:14:59.213813 containerd[1475]: time="2025-02-13T20:14:59.212535439Z" level=info msg="using legacy CRI server" Feb 13 20:14:59.213813 containerd[1475]: time="2025-02-13T20:14:59.212552673Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:14:59.213813 containerd[1475]: time="2025-02-13T20:14:59.212846182Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:14:59.214896 
containerd[1475]: time="2025-02-13T20:14:59.214781902Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:14:59.215331 containerd[1475]: time="2025-02-13T20:14:59.215236502Z" level=info msg="Start subscribing containerd event" Feb 13 20:14:59.215422 containerd[1475]: time="2025-02-13T20:14:59.215357632Z" level=info msg="Start recovering state" Feb 13 20:14:59.215503 containerd[1475]: time="2025-02-13T20:14:59.215474800Z" level=info msg="Start event monitor" Feb 13 20:14:59.215561 containerd[1475]: time="2025-02-13T20:14:59.215506848Z" level=info msg="Start snapshots syncer" Feb 13 20:14:59.215561 containerd[1475]: time="2025-02-13T20:14:59.215521351Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:14:59.215561 containerd[1475]: time="2025-02-13T20:14:59.215533441Z" level=info msg="Start streaming server" Feb 13 20:14:59.216791 containerd[1475]: time="2025-02-13T20:14:59.215981175Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:14:59.216791 containerd[1475]: time="2025-02-13T20:14:59.216054326Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:14:59.216791 containerd[1475]: time="2025-02-13T20:14:59.216139210Z" level=info msg="containerd successfully booted in 0.198806s" Feb 13 20:14:59.216952 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:14:59.224240 systemd-networkd[1378]: eth0: Gained IPv6LL Feb 13 20:14:59.227038 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Feb 13 20:14:59.231598 sshd[1546]: Accepted publickey for core from 147.75.109.163 port 33294 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:14:59.236407 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:59.265968 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:14:59.283153 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:14:59.295388 systemd-logind[1451]: New session 1 of user core. Feb 13 20:14:59.335081 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:14:59.350340 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:14:59.376014 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:14:59.635627 systemd[1561]: Queued start job for default target default.target. Feb 13 20:14:59.644548 systemd[1561]: Created slice app.slice - User Application Slice. Feb 13 20:14:59.644596 systemd[1561]: Reached target paths.target - Paths. Feb 13 20:14:59.644624 systemd[1561]: Reached target timers.target - Timers. Feb 13 20:14:59.648027 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:14:59.687432 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:14:59.690674 systemd[1561]: Reached target sockets.target - Sockets. Feb 13 20:14:59.690716 systemd[1561]: Reached target basic.target - Basic System. Feb 13 20:14:59.690847 systemd[1561]: Reached target default.target - Main User Target. Feb 13 20:14:59.690901 systemd[1561]: Startup finished in 298ms. Feb 13 20:14:59.691894 systemd[1]: Started user@500.service - User Manager for UID 500. 
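The containerd error near the start of this chunk ("no network config found in /etc/cni/net.d") is expected until a CNI plugin installs its configuration. A purely illustrative bridge conflist that would satisfy the check; real clusters usually get this file from flannel, Calico, Cilium, or similar instead:

    mkdir -p /etc/cni/net.d
    cat > /etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.85.0.0/16" }]] }
        }
      ]
    }
    EOF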
Feb 13 20:14:59.706229 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:14:59.801346 systemd[1]: Started sshd@1-64.23.201.9:22-147.75.109.163:45202.service - OpenSSH per-connection server daemon (147.75.109.163:45202). Feb 13 20:14:59.891862 tar[1464]: linux-amd64/LICENSE Feb 13 20:14:59.891862 tar[1464]: linux-amd64/README.md Feb 13 20:14:59.911773 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:14:59.914629 sshd[1572]: Accepted publickey for core from 147.75.109.163 port 45202 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:14:59.920110 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:14:59.935567 systemd-logind[1451]: New session 2 of user core. Feb 13 20:14:59.953239 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:15:00.039304 sshd[1572]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:00.052019 systemd[1]: sshd@1-64.23.201.9:22-147.75.109.163:45202.service: Deactivated successfully. Feb 13 20:15:00.056064 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:15:00.061126 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:15:00.071265 systemd[1]: Started sshd@2-64.23.201.9:22-147.75.109.163:45208.service - OpenSSH per-connection server daemon (147.75.109.163:45208). Feb 13 20:15:00.077804 systemd-logind[1451]: Removed session 2. Feb 13 20:15:00.119034 sshd[1582]: Accepted publickey for core from 147.75.109.163 port 45208 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:00.122136 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:00.133803 systemd-logind[1451]: New session 3 of user core. Feb 13 20:15:00.141233 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:15:00.219522 sshd[1582]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:00.227183 systemd[1]: sshd@2-64.23.201.9:22-147.75.109.163:45208.service: Deactivated successfully. Feb 13 20:15:00.230957 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:15:00.232404 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:15:00.235822 systemd-logind[1451]: Removed session 3. Feb 13 20:15:00.860131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:00.864433 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:15:00.868871 systemd[1]: Startup finished in 1.571s (kernel) + 6.439s (initrd) + 7.722s (userspace) = 15.734s. Feb 13 20:15:00.873885 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:15:02.597216 kubelet[1593]: E0213 20:15:02.596087 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:15:02.600697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:15:02.600976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:15:02.601458 systemd[1]: kubelet.service: Consumed 1.462s CPU time. 
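The kubelet failure above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm during `kubeadm init` or `kubeadm join`, and the unit keeps restarting until that happens. An illustrative check from the shell:

    test -f /var/lib/kubelet/config.yaml \
      || echo "kubelet config not present yet; kubeadm init/join generates it"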
Feb 13 20:15:10.242436 systemd[1]: Started sshd@3-64.23.201.9:22-147.75.109.163:53912.service - OpenSSH per-connection server daemon (147.75.109.163:53912). Feb 13 20:15:10.292764 sshd[1606]: Accepted publickey for core from 147.75.109.163 port 53912 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:10.295386 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:10.304369 systemd-logind[1451]: New session 4 of user core. Feb 13 20:15:10.311574 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:15:10.381592 sshd[1606]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:10.393729 systemd[1]: sshd@3-64.23.201.9:22-147.75.109.163:53912.service: Deactivated successfully. Feb 13 20:15:10.397534 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:15:10.401072 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:15:10.406426 systemd[1]: Started sshd@4-64.23.201.9:22-147.75.109.163:53926.service - OpenSSH per-connection server daemon (147.75.109.163:53926). Feb 13 20:15:10.408885 systemd-logind[1451]: Removed session 4. Feb 13 20:15:10.460627 sshd[1613]: Accepted publickey for core from 147.75.109.163 port 53926 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:10.463289 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:10.472968 systemd-logind[1451]: New session 5 of user core. Feb 13 20:15:10.483169 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:15:10.548175 sshd[1613]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:10.563143 systemd[1]: sshd@4-64.23.201.9:22-147.75.109.163:53926.service: Deactivated successfully. Feb 13 20:15:10.566827 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:15:10.571140 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:15:10.576604 systemd[1]: Started sshd@5-64.23.201.9:22-147.75.109.163:53928.service - OpenSSH per-connection server daemon (147.75.109.163:53928). Feb 13 20:15:10.579155 systemd-logind[1451]: Removed session 5. Feb 13 20:15:10.645960 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 53928 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:10.648598 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:10.658148 systemd-logind[1451]: New session 6 of user core. Feb 13 20:15:10.665164 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:15:10.737152 sshd[1620]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:10.749226 systemd[1]: sshd@5-64.23.201.9:22-147.75.109.163:53928.service: Deactivated successfully. Feb 13 20:15:10.752271 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:15:10.753904 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:15:10.763397 systemd[1]: Started sshd@6-64.23.201.9:22-147.75.109.163:53934.service - OpenSSH per-connection server daemon (147.75.109.163:53934). Feb 13 20:15:10.768065 systemd-logind[1451]: Removed session 6. 
Feb 13 20:15:10.830622 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 53934 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:10.833680 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:10.841879 systemd-logind[1451]: New session 7 of user core. Feb 13 20:15:10.851173 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:15:10.945097 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:15:10.945661 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:15:10.961690 sudo[1630]: pam_unix(sudo:session): session closed for user root Feb 13 20:15:10.966424 sshd[1627]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:10.979687 systemd[1]: sshd@6-64.23.201.9:22-147.75.109.163:53934.service: Deactivated successfully. Feb 13 20:15:10.983315 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:15:10.986006 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:15:10.993445 systemd[1]: Started sshd@7-64.23.201.9:22-147.75.109.163:53938.service - OpenSSH per-connection server daemon (147.75.109.163:53938). Feb 13 20:15:10.996703 systemd-logind[1451]: Removed session 7. Feb 13 20:15:11.057800 sshd[1635]: Accepted publickey for core from 147.75.109.163 port 53938 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:11.060658 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:11.069340 systemd-logind[1451]: New session 8 of user core. Feb 13 20:15:11.075171 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:15:11.142395 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:15:11.142958 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:15:11.150534 sudo[1639]: pam_unix(sudo:session): session closed for user root Feb 13 20:15:11.160897 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:15:11.161420 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:15:11.183303 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:15:11.200906 auditctl[1642]: No rules Feb 13 20:15:11.201569 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:15:11.201976 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:15:11.214812 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:15:11.259496 augenrules[1660]: No rules Feb 13 20:15:11.261093 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:15:11.263001 sudo[1638]: pam_unix(sudo:session): session closed for user root Feb 13 20:15:11.269033 sshd[1635]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:11.282674 systemd[1]: sshd@7-64.23.201.9:22-147.75.109.163:53938.service: Deactivated successfully. Feb 13 20:15:11.285508 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:15:11.288980 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:15:11.297566 systemd[1]: Started sshd@8-64.23.201.9:22-147.75.109.163:53954.service - OpenSSH per-connection server daemon (147.75.109.163:53954). 
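The sudo/auditctl exchange above removes the shipped rule files and reloads an empty rule set. A sketch of what the audit-rules.service restart effectively does, using the standard audit userspace tools (not a transcript of the service itself):

    augenrules --load    # rebuild /etc/audit/audit.rules from /etc/audit/rules.d/
    auditctl -l          # prints "No rules" when the set is empty, as in the log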
Feb 13 20:15:11.300160 systemd-logind[1451]: Removed session 8. Feb 13 20:15:11.344303 sshd[1668]: Accepted publickey for core from 147.75.109.163 port 53954 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:15:11.346685 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:15:11.356918 systemd-logind[1451]: New session 9 of user core. Feb 13 20:15:11.363183 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:15:11.428002 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:15:11.428584 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:15:12.224408 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:15:12.239011 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:15:12.816490 dockerd[1687]: time="2025-02-13T20:15:12.816328108Z" level=info msg="Starting up" Feb 13 20:15:12.823862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:15:12.837145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:13.044019 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3438926146-merged.mount: Deactivated successfully. Feb 13 20:15:13.580566 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:15:13.581321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:13.625590 dockerd[1687]: time="2025-02-13T20:15:13.625204424Z" level=info msg="Loading containers: start." Feb 13 20:15:13.683521 kubelet[1713]: E0213 20:15:13.683378 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:15:13.693200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:15:13.694815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:15:13.811865 kernel: Initializing XFRM netlink socket Feb 13 20:15:13.855118 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Feb 13 20:15:13.926652 systemd-networkd[1378]: docker0: Link UP Feb 13 20:15:13.941408 systemd-timesyncd[1342]: Contacted time server 45.55.58.103:123 (2.flatcar.pool.ntp.org). Feb 13 20:15:13.941519 systemd-timesyncd[1342]: Initial clock synchronization to Thu 2025-02-13 20:15:14.020462 UTC. Feb 13 20:15:13.965359 dockerd[1687]: time="2025-02-13T20:15:13.965283425Z" level=info msg="Loading containers: done." 
Feb 13 20:15:14.001972 dockerd[1687]: time="2025-02-13T20:15:14.001868661Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:15:14.002209 dockerd[1687]: time="2025-02-13T20:15:14.002066075Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:15:14.002278 dockerd[1687]: time="2025-02-13T20:15:14.002204679Z" level=info msg="Daemon has completed initialization" Feb 13 20:15:14.073629 dockerd[1687]: time="2025-02-13T20:15:14.073507321Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:15:14.073974 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:15:15.218902 containerd[1475]: time="2025-02-13T20:15:15.218784746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 20:15:15.945104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800731418.mount: Deactivated successfully. Feb 13 20:15:17.541285 containerd[1475]: time="2025-02-13T20:15:17.540945643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:17.543661 containerd[1475]: time="2025-02-13T20:15:17.543095536Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588" Feb 13 20:15:17.546225 containerd[1475]: time="2025-02-13T20:15:17.546159596Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:17.554749 containerd[1475]: time="2025-02-13T20:15:17.554669126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:17.557149 containerd[1475]: time="2025-02-13T20:15:17.557085161Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 2.338177778s" Feb 13 20:15:17.557149 containerd[1475]: time="2025-02-13T20:15:17.557150721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 20:15:17.560184 containerd[1475]: time="2025-02-13T20:15:17.560132033Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 20:15:19.425513 containerd[1475]: time="2025-02-13T20:15:19.425042891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:19.431144 containerd[1475]: time="2025-02-13T20:15:19.431003126Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193" Feb 13 20:15:19.433681 containerd[1475]: time="2025-02-13T20:15:19.433608534Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:19.449781 containerd[1475]: time="2025-02-13T20:15:19.449687534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:19.454137 containerd[1475]: time="2025-02-13T20:15:19.453862515Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.893504992s" Feb 13 20:15:19.454137 containerd[1475]: time="2025-02-13T20:15:19.453944990Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 20:15:19.455800 containerd[1475]: time="2025-02-13T20:15:19.455599856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 20:15:20.950666 containerd[1475]: time="2025-02-13T20:15:20.950580144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:20.953274 containerd[1475]: time="2025-02-13T20:15:20.953171570Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425" Feb 13 20:15:20.956092 containerd[1475]: time="2025-02-13T20:15:20.955981197Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:20.962964 containerd[1475]: time="2025-02-13T20:15:20.962845059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:20.965539 containerd[1475]: time="2025-02-13T20:15:20.965331387Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.509666914s" Feb 13 20:15:20.965539 containerd[1475]: time="2025-02-13T20:15:20.965392572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 20:15:20.966384 containerd[1475]: time="2025-02-13T20:15:20.966323742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 20:15:20.971105 systemd-resolved[1328]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Feb 13 20:15:22.215270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261598215.mount: Deactivated successfully. 
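The PullImage/ImageCreate entries above come from containerd's CRI plugin. The same pulls can be reproduced by hand against containerd's k8s.io namespace with ctr; the exact invocation below is an assumption based on the containerd v1.7.21 reported earlier in the log:

    ctr -n k8s.io images pull registry.k8s.io/kube-proxy:v1.31.6
    ctr -n k8s.io images ls | grep registry.k8s.io/kube-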
Feb 13 20:15:22.824770 containerd[1475]: time="2025-02-13T20:15:22.823377854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:22.830926 containerd[1475]: time="2025-02-13T20:15:22.830832421Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 20:15:22.834655 containerd[1475]: time="2025-02-13T20:15:22.834545095Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:22.841673 containerd[1475]: time="2025-02-13T20:15:22.841184141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:22.842576 containerd[1475]: time="2025-02-13T20:15:22.842444937Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.875885481s" Feb 13 20:15:22.842695 containerd[1475]: time="2025-02-13T20:15:22.842579465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 20:15:22.843891 containerd[1475]: time="2025-02-13T20:15:22.843828319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:15:23.454945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021174803.mount: Deactivated successfully. Feb 13 20:15:23.892596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:15:23.903183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:24.055024 systemd-resolved[1328]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Feb 13 20:15:24.082059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:24.086826 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:15:24.169047 kubelet[1945]: E0213 20:15:24.168482 1945 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:15:24.173618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:15:24.174081 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 20:15:24.754808 containerd[1475]: time="2025-02-13T20:15:24.753916189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:24.756874 containerd[1475]: time="2025-02-13T20:15:24.756780480Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 20:15:24.759618 containerd[1475]: time="2025-02-13T20:15:24.759514748Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:24.768814 containerd[1475]: time="2025-02-13T20:15:24.767232897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:24.770262 containerd[1475]: time="2025-02-13T20:15:24.770189450Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.926135712s" Feb 13 20:15:24.770501 containerd[1475]: time="2025-02-13T20:15:24.770469788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:15:24.771415 containerd[1475]: time="2025-02-13T20:15:24.771380352Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:15:25.372125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024829718.mount: Deactivated successfully. 
Feb 13 20:15:25.386288 containerd[1475]: time="2025-02-13T20:15:25.386182684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:25.388537 containerd[1475]: time="2025-02-13T20:15:25.388474908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 20:15:25.391061 containerd[1475]: time="2025-02-13T20:15:25.390943248Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:25.396306 containerd[1475]: time="2025-02-13T20:15:25.396221540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:25.398296 containerd[1475]: time="2025-02-13T20:15:25.398209544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 626.652361ms" Feb 13 20:15:25.398296 containerd[1475]: time="2025-02-13T20:15:25.398294854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 20:15:25.399498 containerd[1475]: time="2025-02-13T20:15:25.399099324Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 20:15:25.993581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1734427597.mount: Deactivated successfully. Feb 13 20:15:28.246026 containerd[1475]: time="2025-02-13T20:15:28.245921015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:28.249477 containerd[1475]: time="2025-02-13T20:15:28.248921802Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Feb 13 20:15:28.252810 containerd[1475]: time="2025-02-13T20:15:28.252026796Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:28.259309 containerd[1475]: time="2025-02-13T20:15:28.259211372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:28.261975 containerd[1475]: time="2025-02-13T20:15:28.261767253Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.862596643s" Feb 13 20:15:28.261975 containerd[1475]: time="2025-02-13T20:15:28.261829748Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 20:15:31.781304 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:15:31.795899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:31.847776 systemd[1]: Reloading requested from client PID 2062 ('systemctl') (unit session-9.scope)... Feb 13 20:15:31.847805 systemd[1]: Reloading... Feb 13 20:15:32.044786 zram_generator::config[2110]: No configuration found. Feb 13 20:15:32.202378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:15:32.316931 systemd[1]: Reloading finished in 468 ms. Feb 13 20:15:32.390323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:32.395351 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:32.401051 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:15:32.401348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:32.407297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:32.560424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:32.577326 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:15:32.653517 kubelet[2158]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:15:32.653517 kubelet[2158]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:15:32.653517 kubelet[2158]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
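The deprecation warnings above say these flags belong in the kubelet config file. A sketch of the equivalent KubeletConfiguration fields; the field names are upstream kubelet config options, while the values and the append-to-file approach are illustrative only (--pod-infra-container-image has no config-file equivalent; per the warning, the sandbox image now comes from CRI):

    cat >> /var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /var/lib/kubelet/volumeplugins
    EOF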
Feb 13 20:15:32.655271 kubelet[2158]: I0213 20:15:32.655158 2158 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:15:33.386632 kubelet[2158]: I0213 20:15:33.384882 2158 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:15:33.386632 kubelet[2158]: I0213 20:15:33.384927 2158 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:15:33.386632 kubelet[2158]: I0213 20:15:33.385496 2158 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:15:33.418480 kubelet[2158]: I0213 20:15:33.418238 2158 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:15:33.418876 kubelet[2158]: E0213 20:15:33.418845 2158 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.201.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:33.428951 kubelet[2158]: E0213 20:15:33.428870 2158 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:15:33.428951 kubelet[2158]: I0213 20:15:33.428948 2158 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:15:33.435770 kubelet[2158]: I0213 20:15:33.435707 2158 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:15:33.435999 kubelet[2158]: I0213 20:15:33.435975 2158 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:15:33.436273 kubelet[2158]: I0213 20:15:33.436141 2158 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:15:33.436441 kubelet[2158]: I0213 20:15:33.436177 2158 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-e-9d3732dae3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:15:33.436762 kubelet[2158]: I0213 20:15:33.436483 2158 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:15:33.436762 kubelet[2158]: I0213 20:15:33.436500 2158 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:15:33.436762 kubelet[2158]: I0213 20:15:33.436680 2158 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:15:33.443132 kubelet[2158]: I0213 20:15:33.443064 2158 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:15:33.443273 kubelet[2158]: I0213 20:15:33.443159 2158 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:15:33.443273 kubelet[2158]: I0213 20:15:33.443249 2158 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:15:33.443427 kubelet[2158]: I0213 20:15:33.443301 2158 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:15:33.444772 kubelet[2158]: W0213 20:15:33.444421 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.201.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-e-9d3732dae3&limit=500&resourceVersion=0": dial tcp 64.23.201.9:6443: connect: connection refused Feb 13 20:15:33.444772 kubelet[2158]: E0213 20:15:33.444617 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://64.23.201.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-e-9d3732dae3&limit=500&resourceVersion=0\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:33.451241 kubelet[2158]: W0213 20:15:33.450842 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.201.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.201.9:6443: connect: connection refused Feb 13 20:15:33.451241 kubelet[2158]: E0213 20:15:33.450915 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.201.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:33.451600 kubelet[2158]: I0213 20:15:33.451567 2158 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:15:33.453800 kubelet[2158]: I0213 20:15:33.453767 2158 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:15:33.453940 kubelet[2158]: W0213 20:15:33.453902 2158 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:15:33.461497 kubelet[2158]: I0213 20:15:33.461462 2158 server.go:1269] "Started kubelet" Feb 13 20:15:33.466776 kubelet[2158]: I0213 20:15:33.466027 2158 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:15:33.467616 kubelet[2158]: I0213 20:15:33.467591 2158 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:15:33.470591 kubelet[2158]: I0213 20:15:33.469422 2158 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:15:33.470591 kubelet[2158]: I0213 20:15:33.469700 2158 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:15:33.470591 kubelet[2158]: I0213 20:15:33.469751 2158 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:15:33.474042 kubelet[2158]: E0213 20:15:33.470253 2158 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.201.9:6443/api/v1/namespaces/default/events\": dial tcp 64.23.201.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-e-9d3732dae3.1823ddc84dab5c74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-e-9d3732dae3,UID:ci-4081.3.1-e-9d3732dae3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-e-9d3732dae3,},FirstTimestamp:2025-02-13 20:15:33.461417076 +0000 UTC m=+0.879127700,LastTimestamp:2025-02-13 20:15:33.461417076 +0000 UTC m=+0.879127700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-e-9d3732dae3,}" Feb 13 20:15:33.475200 kubelet[2158]: I0213 20:15:33.475172 2158 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:15:33.479952 kubelet[2158]: E0213 20:15:33.479921 2158 kubelet.go:1478] 
"Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:15:33.480216 kubelet[2158]: E0213 20:15:33.480195 2158 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-e-9d3732dae3\" not found" Feb 13 20:15:33.480291 kubelet[2158]: I0213 20:15:33.480242 2158 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:15:33.480539 kubelet[2158]: I0213 20:15:33.480518 2158 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:15:33.480684 kubelet[2158]: I0213 20:15:33.480624 2158 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:15:33.481822 kubelet[2158]: E0213 20:15:33.481784 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.201.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-e-9d3732dae3?timeout=10s\": dial tcp 64.23.201.9:6443: connect: connection refused" interval="200ms" Feb 13 20:15:33.482052 kubelet[2158]: I0213 20:15:33.482033 2158 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:15:33.482132 kubelet[2158]: I0213 20:15:33.482101 2158 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:15:33.482596 kubelet[2158]: W0213 20:15:33.482549 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.201.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.201.9:6443: connect: connection refused Feb 13 20:15:33.482686 kubelet[2158]: E0213 20:15:33.482599 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.201.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:33.483984 kubelet[2158]: I0213 20:15:33.483950 2158 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:15:33.497330 kubelet[2158]: I0213 20:15:33.497281 2158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:15:33.499178 kubelet[2158]: I0213 20:15:33.499143 2158 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:15:33.499407 kubelet[2158]: I0213 20:15:33.499391 2158 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:15:33.499522 kubelet[2158]: I0213 20:15:33.499513 2158 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:15:33.499763 kubelet[2158]: E0213 20:15:33.499664 2158 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:15:33.509250 kubelet[2158]: W0213 20:15:33.508958 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.201.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.201.9:6443: connect: connection refused Feb 13 20:15:33.509250 kubelet[2158]: E0213 20:15:33.509062 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.201.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:33.517470 kubelet[2158]: I0213 20:15:33.517426 2158 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:15:33.517470 kubelet[2158]: I0213 20:15:33.517454 2158 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:15:33.517683 kubelet[2158]: I0213 20:15:33.517485 2158 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:15:33.522801 kubelet[2158]: I0213 20:15:33.522752 2158 policy_none.go:49] "None policy: Start" Feb 13 20:15:33.523770 kubelet[2158]: I0213 20:15:33.523715 2158 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:15:33.523770 kubelet[2158]: I0213 20:15:33.523773 2158 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:15:33.537639 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:15:33.555701 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:15:33.561589 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:15:33.574522 kubelet[2158]: I0213 20:15:33.574457 2158 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:15:33.574872 kubelet[2158]: I0213 20:15:33.574820 2158 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:15:33.575295 kubelet[2158]: I0213 20:15:33.574848 2158 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:15:33.575511 kubelet[2158]: I0213 20:15:33.575457 2158 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:15:33.577943 kubelet[2158]: E0213 20:15:33.577786 2158 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-e-9d3732dae3\" not found" Feb 13 20:15:33.612608 systemd[1]: Created slice kubepods-burstable-pode9c29d35398135a356597b7dde391159.slice - libcontainer container kubepods-burstable-pode9c29d35398135a356597b7dde391159.slice. Feb 13 20:15:33.635594 systemd[1]: Created slice kubepods-burstable-pod5ce21aaae380eb8369655eda09bd6edf.slice - libcontainer container kubepods-burstable-pod5ce21aaae380eb8369655eda09bd6edf.slice. 
Feb 13 20:15:33.652912 systemd[1]: Created slice kubepods-burstable-podef69e0d22ecf7e43700a1b178e5798de.slice - libcontainer container kubepods-burstable-podef69e0d22ecf7e43700a1b178e5798de.slice. Feb 13 20:15:33.677820 kubelet[2158]: I0213 20:15:33.677366 2158 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.680304 kubelet[2158]: E0213 20:15:33.678003 2158 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.201.9:6443/api/v1/nodes\": dial tcp 64.23.201.9:6443: connect: connection refused" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.681449 kubelet[2158]: I0213 20:15:33.681243 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: \"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.681449 kubelet[2158]: I0213 20:15:33.681277 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: \"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.681449 kubelet[2158]: I0213 20:15:33.681313 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ce21aaae380eb8369655eda09bd6edf-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-e-9d3732dae3\" (UID: \"5ce21aaae380eb8369655eda09bd6edf\") " pod="kube-system/kube-scheduler-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.681449 kubelet[2158]: I0213 20:15:33.681334 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9c29d35398135a356597b7dde391159-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-e-9d3732dae3\" (UID: \"e9c29d35398135a356597b7dde391159\") " pod="kube-system/kube-apiserver-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.681449 kubelet[2158]: I0213 20:15:33.681355 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9c29d35398135a356597b7dde391159-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-e-9d3732dae3\" (UID: \"e9c29d35398135a356597b7dde391159\") " pod="kube-system/kube-apiserver-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.681804 kubelet[2158]: I0213 20:15:33.681387 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: \"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.681804 kubelet[2158]: I0213 20:15:33.681404 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: 
\"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.681804 kubelet[2158]: I0213 20:15:33.681423 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9c29d35398135a356597b7dde391159-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-e-9d3732dae3\" (UID: \"e9c29d35398135a356597b7dde391159\") " pod="kube-system/kube-apiserver-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.682517 kubelet[2158]: I0213 20:15:33.682016 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: \"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.683539 kubelet[2158]: E0213 20:15:33.683443 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.201.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-e-9d3732dae3?timeout=10s\": dial tcp 64.23.201.9:6443: connect: connection refused" interval="400ms" Feb 13 20:15:33.881827 kubelet[2158]: I0213 20:15:33.881663 2158 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.882309 kubelet[2158]: E0213 20:15:33.882170 2158 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.201.9:6443/api/v1/nodes\": dial tcp 64.23.201.9:6443: connect: connection refused" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:33.930717 kubelet[2158]: E0213 20:15:33.930198 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:33.931267 containerd[1475]: time="2025-02-13T20:15:33.931219802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-e-9d3732dae3,Uid:e9c29d35398135a356597b7dde391159,Namespace:kube-system,Attempt:0,}" Feb 13 20:15:33.940634 systemd-resolved[1328]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Feb 13 20:15:33.941870 kubelet[2158]: E0213 20:15:33.941069 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:33.948835 containerd[1475]: time="2025-02-13T20:15:33.948724062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-e-9d3732dae3,Uid:5ce21aaae380eb8369655eda09bd6edf,Namespace:kube-system,Attempt:0,}" Feb 13 20:15:33.957405 kubelet[2158]: E0213 20:15:33.957279 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:33.959343 containerd[1475]: time="2025-02-13T20:15:33.959295442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-e-9d3732dae3,Uid:ef69e0d22ecf7e43700a1b178e5798de,Namespace:kube-system,Attempt:0,}" Feb 13 20:15:34.084095 kubelet[2158]: E0213 20:15:34.084039 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.201.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-e-9d3732dae3?timeout=10s\": dial tcp 64.23.201.9:6443: connect: connection refused" interval="800ms" Feb 13 20:15:34.284452 kubelet[2158]: I0213 20:15:34.284305 2158 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:34.285267 kubelet[2158]: E0213 20:15:34.284794 2158 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.201.9:6443/api/v1/nodes\": dial tcp 64.23.201.9:6443: connect: connection refused" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:34.355853 kubelet[2158]: W0213 20:15:34.355699 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.201.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.201.9:6443: connect: connection refused Feb 13 20:15:34.355853 kubelet[2158]: E0213 20:15:34.355826 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.201.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:34.560085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219974049.mount: Deactivated successfully. 
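
The lease-controller errors scattered through this window retry with intervals of 200ms, 400ms, 800ms and then 1.6s, i.e. the interval doubles after each consecutive failure while the API server at 64.23.201.9:6443 is still refusing connections. A toy model of that progression; the ceiling value is an assumption for illustration only, since the log shows just the first four steps.

    from itertools import islice

    def retry_intervals(base_ms: int = 200, cap_ms: int = 7000):
        # cap_ms is assumed; the logged sequence never reaches a cap.
        interval = base_ms
        while True:
            yield interval
            interval = min(interval * 2, cap_ms)

    print([f"{ms / 1000:g}s" for ms in islice(retry_intervals(), 6)])
    # -> ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
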
Feb 13 20:15:34.580400 containerd[1475]: time="2025-02-13T20:15:34.580277134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:34.583083 containerd[1475]: time="2025-02-13T20:15:34.583016092Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:34.585566 containerd[1475]: time="2025-02-13T20:15:34.585475914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:15:34.587450 containerd[1475]: time="2025-02-13T20:15:34.587361426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:15:34.589749 containerd[1475]: time="2025-02-13T20:15:34.589678007Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:34.593776 containerd[1475]: time="2025-02-13T20:15:34.593120217Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:34.595396 containerd[1475]: time="2025-02-13T20:15:34.594974656Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:15:34.600929 containerd[1475]: time="2025-02-13T20:15:34.600872793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:15:34.602721 containerd[1475]: time="2025-02-13T20:15:34.602659494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.776559ms" Feb 13 20:15:34.606616 containerd[1475]: time="2025-02-13T20:15:34.606447791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 647.067794ms" Feb 13 20:15:34.608072 containerd[1475]: time="2025-02-13T20:15:34.607733961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.412398ms" Feb 13 20:15:34.668252 kubelet[2158]: W0213 20:15:34.668040 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.201.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-e-9d3732dae3&limit=500&resourceVersion=0": dial tcp 64.23.201.9:6443: connect: connection refused Feb 
13 20:15:34.668252 kubelet[2158]: E0213 20:15:34.668179 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.201.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-e-9d3732dae3&limit=500&resourceVersion=0\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:34.668866 kubelet[2158]: W0213 20:15:34.668578 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.201.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.201.9:6443: connect: connection refused Feb 13 20:15:34.668866 kubelet[2158]: E0213 20:15:34.668645 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.201.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:34.714337 kubelet[2158]: W0213 20:15:34.714074 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.201.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.201.9:6443: connect: connection refused Feb 13 20:15:34.714337 kubelet[2158]: E0213 20:15:34.714231 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.201.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:34.846299 containerd[1475]: time="2025-02-13T20:15:34.844586703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:34.848082 containerd[1475]: time="2025-02-13T20:15:34.846596870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:34.848082 containerd[1475]: time="2025-02-13T20:15:34.847560570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:34.848082 containerd[1475]: time="2025-02-13T20:15:34.847726091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:34.852810 containerd[1475]: time="2025-02-13T20:15:34.852593280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:34.853073 containerd[1475]: time="2025-02-13T20:15:34.852816525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:34.853073 containerd[1475]: time="2025-02-13T20:15:34.852867578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:34.853283 containerd[1475]: time="2025-02-13T20:15:34.853078560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:34.854618 containerd[1475]: time="2025-02-13T20:15:34.852979902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:34.854618 containerd[1475]: time="2025-02-13T20:15:34.854552955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:34.854618 containerd[1475]: time="2025-02-13T20:15:34.854582472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:34.855272 containerd[1475]: time="2025-02-13T20:15:34.855096616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:34.889633 kubelet[2158]: E0213 20:15:34.887770 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.201.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-e-9d3732dae3?timeout=10s\": dial tcp 64.23.201.9:6443: connect: connection refused" interval="1.6s" Feb 13 20:15:34.894078 systemd[1]: Started cri-containerd-b2a40394d6d166d8e0ac01b57dd905c5f6a757d115285f8a12ded50649c8c3b6.scope - libcontainer container b2a40394d6d166d8e0ac01b57dd905c5f6a757d115285f8a12ded50649c8c3b6. Feb 13 20:15:34.903940 systemd[1]: Started cri-containerd-d4c646983a0b7ba8ceac062d49a5ebe87d0345c562238d790b3979c3e2ef785f.scope - libcontainer container d4c646983a0b7ba8ceac062d49a5ebe87d0345c562238d790b3979c3e2ef785f. Feb 13 20:15:34.915245 systemd[1]: Started cri-containerd-4e3b9096118bf897443b43eb5b3462a749e8522ddf97fae85c21e8f76f419d7d.scope - libcontainer container 4e3b9096118bf897443b43eb5b3462a749e8522ddf97fae85c21e8f76f419d7d. 
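
Each "Started cri-containerd-<id>.scope" entry above embeds the 64-hex containerd container (or sandbox) ID in the systemd unit name, which is handy for correlating journal entries with CRI output. A tiny illustrative extractor, using the first scope started above:

    import re

    SCOPE = re.compile(r"cri-containerd-([0-9a-f]{64})\.scope")

    unit = "cri-containerd-b2a40394d6d166d8e0ac01b57dd905c5f6a757d115285f8a12ded50649c8c3b6.scope"
    container_id = SCOPE.search(unit).group(1)
    print(container_id[:12])  # shortened prefix for readability
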
Feb 13 20:15:35.012874 containerd[1475]: time="2025-02-13T20:15:35.012759708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-e-9d3732dae3,Uid:ef69e0d22ecf7e43700a1b178e5798de,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e3b9096118bf897443b43eb5b3462a749e8522ddf97fae85c21e8f76f419d7d\"" Feb 13 20:15:35.027371 kubelet[2158]: E0213 20:15:35.027330 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:35.032345 containerd[1475]: time="2025-02-13T20:15:35.032285117Z" level=info msg="CreateContainer within sandbox \"4e3b9096118bf897443b43eb5b3462a749e8522ddf97fae85c21e8f76f419d7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:15:35.045793 containerd[1475]: time="2025-02-13T20:15:35.045511188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-e-9d3732dae3,Uid:e9c29d35398135a356597b7dde391159,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4c646983a0b7ba8ceac062d49a5ebe87d0345c562238d790b3979c3e2ef785f\"" Feb 13 20:15:35.068217 containerd[1475]: time="2025-02-13T20:15:35.068136645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-e-9d3732dae3,Uid:5ce21aaae380eb8369655eda09bd6edf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2a40394d6d166d8e0ac01b57dd905c5f6a757d115285f8a12ded50649c8c3b6\"" Feb 13 20:15:35.069769 kubelet[2158]: E0213 20:15:35.069603 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:35.071867 kubelet[2158]: E0213 20:15:35.071831 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:35.075762 containerd[1475]: time="2025-02-13T20:15:35.075696230Z" level=info msg="CreateContainer within sandbox \"d4c646983a0b7ba8ceac062d49a5ebe87d0345c562238d790b3979c3e2ef785f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:15:35.079247 containerd[1475]: time="2025-02-13T20:15:35.079035386Z" level=info msg="CreateContainer within sandbox \"b2a40394d6d166d8e0ac01b57dd905c5f6a757d115285f8a12ded50649c8c3b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:15:35.086892 kubelet[2158]: I0213 20:15:35.086414 2158 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:35.086892 kubelet[2158]: E0213 20:15:35.086846 2158 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.201.9:6443/api/v1/nodes\": dial tcp 64.23.201.9:6443: connect: connection refused" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:35.099959 containerd[1475]: time="2025-02-13T20:15:35.099808123Z" level=info msg="CreateContainer within sandbox \"4e3b9096118bf897443b43eb5b3462a749e8522ddf97fae85c21e8f76f419d7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e7a91c7f2c9def174966d988572fdb62f1c6806954dc0c6df179cf322e131354\"" Feb 13 20:15:35.103807 containerd[1475]: time="2025-02-13T20:15:35.102817707Z" level=info msg="StartContainer for \"e7a91c7f2c9def174966d988572fdb62f1c6806954dc0c6df179cf322e131354\"" Feb 13 20:15:35.140535 
containerd[1475]: time="2025-02-13T20:15:35.139792688Z" level=info msg="CreateContainer within sandbox \"d4c646983a0b7ba8ceac062d49a5ebe87d0345c562238d790b3979c3e2ef785f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b687ef5fac29d2be718246feed95fa1e00536d2cdcc2f6ad9e51abf15d33d5a\"" Feb 13 20:15:35.143318 containerd[1475]: time="2025-02-13T20:15:35.142955766Z" level=info msg="StartContainer for \"4b687ef5fac29d2be718246feed95fa1e00536d2cdcc2f6ad9e51abf15d33d5a\"" Feb 13 20:15:35.148405 systemd[1]: Started cri-containerd-e7a91c7f2c9def174966d988572fdb62f1c6806954dc0c6df179cf322e131354.scope - libcontainer container e7a91c7f2c9def174966d988572fdb62f1c6806954dc0c6df179cf322e131354. Feb 13 20:15:35.159128 containerd[1475]: time="2025-02-13T20:15:35.159072014Z" level=info msg="CreateContainer within sandbox \"b2a40394d6d166d8e0ac01b57dd905c5f6a757d115285f8a12ded50649c8c3b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e64c548f6b0b66de2ef6c4ec15df84fd3444513dbfdeb309e7011540c5492c8\"" Feb 13 20:15:35.160652 containerd[1475]: time="2025-02-13T20:15:35.160598750Z" level=info msg="StartContainer for \"4e64c548f6b0b66de2ef6c4ec15df84fd3444513dbfdeb309e7011540c5492c8\"" Feb 13 20:15:35.206565 systemd[1]: Started cri-containerd-4b687ef5fac29d2be718246feed95fa1e00536d2cdcc2f6ad9e51abf15d33d5a.scope - libcontainer container 4b687ef5fac29d2be718246feed95fa1e00536d2cdcc2f6ad9e51abf15d33d5a. Feb 13 20:15:35.252169 systemd[1]: Started cri-containerd-4e64c548f6b0b66de2ef6c4ec15df84fd3444513dbfdeb309e7011540c5492c8.scope - libcontainer container 4e64c548f6b0b66de2ef6c4ec15df84fd3444513dbfdeb309e7011540c5492c8. Feb 13 20:15:35.261239 containerd[1475]: time="2025-02-13T20:15:35.261175600Z" level=info msg="StartContainer for \"e7a91c7f2c9def174966d988572fdb62f1c6806954dc0c6df179cf322e131354\" returns successfully" Feb 13 20:15:35.335682 containerd[1475]: time="2025-02-13T20:15:35.335625186Z" level=info msg="StartContainer for \"4b687ef5fac29d2be718246feed95fa1e00536d2cdcc2f6ad9e51abf15d33d5a\" returns successfully" Feb 13 20:15:35.379594 containerd[1475]: time="2025-02-13T20:15:35.379383398Z" level=info msg="StartContainer for \"4e64c548f6b0b66de2ef6c4ec15df84fd3444513dbfdeb309e7011540c5492c8\" returns successfully" Feb 13 20:15:35.436123 kubelet[2158]: E0213 20:15:35.436070 2158 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.201.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.201.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:15:35.527772 kubelet[2158]: E0213 20:15:35.527564 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:35.529126 kubelet[2158]: E0213 20:15:35.528862 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:35.532023 kubelet[2158]: E0213 20:15:35.531993 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:36.535814 kubelet[2158]: E0213 20:15:36.535280 
2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:36.689282 kubelet[2158]: I0213 20:15:36.688425 2158 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:37.537093 kubelet[2158]: E0213 20:15:37.536993 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:37.643881 kubelet[2158]: E0213 20:15:37.643809 2158 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-e-9d3732dae3\" not found" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:37.742775 kubelet[2158]: I0213 20:15:37.739929 2158 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:38.448658 kubelet[2158]: I0213 20:15:38.448570 2158 apiserver.go:52] "Watching apiserver" Feb 13 20:15:38.481624 kubelet[2158]: I0213 20:15:38.481515 2158 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:15:40.065584 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-9.scope)... Feb 13 20:15:40.065603 systemd[1]: Reloading... Feb 13 20:15:40.217878 zram_generator::config[2471]: No configuration found. Feb 13 20:15:40.456260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:15:40.689311 systemd[1]: Reloading finished in 623 ms. Feb 13 20:15:40.766444 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:40.788799 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:15:40.789533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:40.789633 systemd[1]: kubelet.service: Consumed 1.340s CPU time, 112.0M memory peak, 0B memory swap peak. Feb 13 20:15:40.798156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:15:41.041714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:15:41.054336 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:15:41.151120 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:15:41.151771 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:15:41.152101 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:15:41.152728 kubelet[2519]: I0213 20:15:41.152670 2519 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:15:41.170786 kubelet[2519]: I0213 20:15:41.170029 2519 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:15:41.170786 kubelet[2519]: I0213 20:15:41.170072 2519 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:15:41.170786 kubelet[2519]: I0213 20:15:41.170442 2519 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:15:41.173860 kubelet[2519]: I0213 20:15:41.173831 2519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:15:41.179402 kubelet[2519]: I0213 20:15:41.179364 2519 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:15:41.188815 kubelet[2519]: E0213 20:15:41.187333 2519 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:15:41.188815 kubelet[2519]: I0213 20:15:41.187373 2519 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:15:41.192657 kubelet[2519]: I0213 20:15:41.192622 2519 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:15:41.192986 kubelet[2519]: I0213 20:15:41.192973 2519 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:15:41.193297 kubelet[2519]: I0213 20:15:41.193238 2519 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:15:41.193667 kubelet[2519]: I0213 20:15:41.193396 2519 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.1-e-9d3732dae3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:15:41.193951 kubelet[2519]: I0213 20:15:41.193930 2519 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:15:41.194050 kubelet[2519]: I0213 20:15:41.194040 2519 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:15:41.194187 kubelet[2519]: I0213 20:15:41.194173 2519 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:15:41.196885 kubelet[2519]: I0213 20:15:41.196851 2519 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:15:41.197220 kubelet[2519]: I0213 20:15:41.197182 2519 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:15:41.197357 kubelet[2519]: I0213 20:15:41.197346 2519 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:15:41.197459 kubelet[2519]: I0213 20:15:41.197449 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:15:41.200959 kubelet[2519]: I0213 20:15:41.200928 2519 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:15:41.210119 kubelet[2519]: I0213 20:15:41.208563 2519 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:15:41.228514 kubelet[2519]: I0213 20:15:41.227394 2519 server.go:1269] "Started kubelet" Feb 13 20:15:41.234317 kubelet[2519]: I0213 20:15:41.234259 2519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:15:41.237399 kubelet[2519]: I0213 20:15:41.229295 2519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:15:41.238005 kubelet[2519]: I0213 20:15:41.237973 2519 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:15:41.239861 kubelet[2519]: I0213 20:15:41.238193 2519 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:15:41.241243 kubelet[2519]: I0213 20:15:41.241212 2519 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:15:41.247271 kubelet[2519]: I0213 20:15:41.245846 2519 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:15:41.250238 kubelet[2519]: E0213 20:15:41.250177 2519 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:15:41.252200 kubelet[2519]: I0213 20:15:41.252165 2519 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:15:41.252793 kubelet[2519]: I0213 20:15:41.252772 2519 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:15:41.253133 kubelet[2519]: I0213 20:15:41.253116 2519 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:15:41.254708 kubelet[2519]: I0213 20:15:41.254687 2519 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:15:41.255021 kubelet[2519]: I0213 20:15:41.254999 2519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:15:41.262501 kubelet[2519]: I0213 20:15:41.261464 2519 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:15:41.320517 kubelet[2519]: I0213 20:15:41.319676 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:15:41.330802 kubelet[2519]: I0213 20:15:41.329199 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:15:41.330802 kubelet[2519]: I0213 20:15:41.329250 2519 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:15:41.330802 kubelet[2519]: I0213 20:15:41.329272 2519 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:15:41.330802 kubelet[2519]: E0213 20:15:41.329325 2519 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:15:41.382035 kubelet[2519]: I0213 20:15:41.381963 2519 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:15:41.382351 kubelet[2519]: I0213 20:15:41.382335 2519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:15:41.382488 kubelet[2519]: I0213 20:15:41.382477 2519 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:15:41.383166 kubelet[2519]: I0213 20:15:41.382773 2519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:15:41.383166 kubelet[2519]: I0213 20:15:41.382790 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:15:41.383166 kubelet[2519]: I0213 20:15:41.382814 2519 policy_none.go:49] "None policy: Start" Feb 13 20:15:41.385823 kubelet[2519]: I0213 20:15:41.385486 2519 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:15:41.385823 kubelet[2519]: I0213 20:15:41.385518 2519 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:15:41.386004 kubelet[2519]: I0213 20:15:41.385876 2519 state_mem.go:75] "Updated machine memory state" Feb 13 20:15:41.399996 kubelet[2519]: I0213 20:15:41.399945 2519 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:15:41.400247 kubelet[2519]: I0213 20:15:41.400218 2519 eviction_manager.go:189] "Eviction manager: 
starting control loop" Feb 13 20:15:41.400317 kubelet[2519]: I0213 20:15:41.400239 2519 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:15:41.401023 kubelet[2519]: I0213 20:15:41.400986 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:15:41.454319 kubelet[2519]: I0213 20:15:41.454261 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9c29d35398135a356597b7dde391159-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-e-9d3732dae3\" (UID: \"e9c29d35398135a356597b7dde391159\") " pod="kube-system/kube-apiserver-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.454780 kubelet[2519]: I0213 20:15:41.454433 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: \"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.454780 kubelet[2519]: I0213 20:15:41.454605 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ce21aaae380eb8369655eda09bd6edf-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-e-9d3732dae3\" (UID: \"5ce21aaae380eb8369655eda09bd6edf\") " pod="kube-system/kube-scheduler-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.455177 kubelet[2519]: I0213 20:15:41.454843 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9c29d35398135a356597b7dde391159-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-e-9d3732dae3\" (UID: \"e9c29d35398135a356597b7dde391159\") " pod="kube-system/kube-apiserver-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.455177 kubelet[2519]: I0213 20:15:41.455068 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: \"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.455177 kubelet[2519]: I0213 20:15:41.455107 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9c29d35398135a356597b7dde391159-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-e-9d3732dae3\" (UID: \"e9c29d35398135a356597b7dde391159\") " pod="kube-system/kube-apiserver-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.455567 kubelet[2519]: I0213 20:15:41.455249 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: \"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.455567 kubelet[2519]: I0213 20:15:41.455277 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: \"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.455567 kubelet[2519]: I0213 20:15:41.455498 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef69e0d22ecf7e43700a1b178e5798de-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-e-9d3732dae3\" (UID: \"ef69e0d22ecf7e43700a1b178e5798de\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.462240 kubelet[2519]: W0213 20:15:41.460589 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:15:41.464444 kubelet[2519]: W0213 20:15:41.463568 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:15:41.464840 kubelet[2519]: W0213 20:15:41.464810 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:15:41.512863 kubelet[2519]: I0213 20:15:41.511413 2519 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.534664 kubelet[2519]: I0213 20:15:41.534223 2519 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.534664 kubelet[2519]: I0213 20:15:41.534373 2519 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:41.766006 kubelet[2519]: E0213 20:15:41.764586 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:41.766006 kubelet[2519]: E0213 20:15:41.765779 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:41.766283 kubelet[2519]: E0213 20:15:41.766201 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:42.200182 kubelet[2519]: I0213 20:15:42.200136 2519 apiserver.go:52] "Watching apiserver" Feb 13 20:15:42.253951 kubelet[2519]: I0213 20:15:42.253894 2519 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:15:42.381102 kubelet[2519]: E0213 20:15:42.381052 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:42.385072 kubelet[2519]: E0213 20:15:42.385019 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:42.431505 kubelet[2519]: W0213 20:15:42.431457 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 
13 20:15:42.431697 kubelet[2519]: E0213 20:15:42.431564 2519 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-e-9d3732dae3\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-e-9d3732dae3" Feb 13 20:15:42.431867 kubelet[2519]: E0213 20:15:42.431843 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:42.530723 kubelet[2519]: I0213 20:15:42.530509 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-e-9d3732dae3" podStartSLOduration=1.53045441 podStartE2EDuration="1.53045441s" podCreationTimestamp="2025-02-13 20:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:42.504895484 +0000 UTC m=+1.427246155" watchObservedRunningTime="2025-02-13 20:15:42.53045441 +0000 UTC m=+1.452805085" Feb 13 20:15:42.597976 kubelet[2519]: I0213 20:15:42.597087 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-e-9d3732dae3" podStartSLOduration=1.597064198 podStartE2EDuration="1.597064198s" podCreationTimestamp="2025-02-13 20:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:42.532421026 +0000 UTC m=+1.454771698" watchObservedRunningTime="2025-02-13 20:15:42.597064198 +0000 UTC m=+1.519414868" Feb 13 20:15:42.597976 kubelet[2519]: I0213 20:15:42.597248 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-e-9d3732dae3" podStartSLOduration=1.5972148229999998 podStartE2EDuration="1.597214823s" podCreationTimestamp="2025-02-13 20:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:42.59720198 +0000 UTC m=+1.519552675" watchObservedRunningTime="2025-02-13 20:15:42.597214823 +0000 UTC m=+1.519565494" Feb 13 20:15:43.136376 update_engine[1452]: I20250213 20:15:43.136243 1452 update_attempter.cc:509] Updating boot flags... 
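The two "Observed pod startup duration" values above come from the kubelet's pod startup latency tracker: podStartE2EDuration is the wall-clock time from podCreationTimestamp to the watch-observed running time, and podStartSLOduration is that same window minus any time spent pulling images, which is why the two match exactly here (these static control-plane pods report zero-value pull timestamps because their images were already on disk). A minimal sketch of that arithmetic, using the kube-apiserver timestamps from the log line above; this is an illustration, not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// Timestamps copied from the kube-apiserver startup-latency log line above.
// The layout matches the default Go time.Time string form used in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-02-13 20:15:41 +0000 UTC")           // podCreationTimestamp
	observed := mustParse("2025-02-13 20:15:42.53045441 +0000 UTC") // watchObservedRunningTime
	var pullWindow time.Duration                                    // zero: image already present, no pull happened

	e2e := observed.Sub(created) // podStartE2EDuration
	slo := e2e - pullWindow      // podStartSLOduration excludes image-pull time

	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
}
```

Running this prints 1.53045441s for both values, matching the logged podStartSLOduration; the "m=+1.45…" suffixes in the log are the monotonic-clock offsets since the kubelet process started, not part of the duration calculation.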
Feb 13 20:15:43.235998 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2564) Feb 13 20:15:43.357428 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2563) Feb 13 20:15:43.396618 kubelet[2519]: E0213 20:15:43.396573 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:44.399634 kubelet[2519]: E0213 20:15:44.399566 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:44.816557 kubelet[2519]: E0213 20:15:44.816357 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:45.465971 kubelet[2519]: I0213 20:15:45.465883 2519 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:15:45.469398 kubelet[2519]: I0213 20:15:45.469063 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:15:45.469525 containerd[1475]: time="2025-02-13T20:15:45.467785900Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:15:46.413753 kubelet[2519]: E0213 20:15:46.413672 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:46.429096 systemd[1]: Created slice kubepods-besteffort-pod2078e50f_92a3_48a4_a06d_6716baa97d30.slice - libcontainer container kubepods-besteffort-pod2078e50f_92a3_48a4_a06d_6716baa97d30.slice. Feb 13 20:15:46.561343 systemd[1]: Created slice kubepods-besteffort-pod3c2ab36f_77b4_45b9_8a93_1d89cdfbcbc8.slice - libcontainer container kubepods-besteffort-pod3c2ab36f_77b4_45b9_8a93_1d89cdfbcbc8.slice. 
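The repeated dns.go "Nameserver limits exceeded" errors are the kubelet warning that the host resolv.conf lists more nameservers than the three it will pass through to pods (the classic glibc resolver limit); everything past the first three is dropped, which is why the applied line above still carries the duplicate 67.207.67.2 entry. A rough sketch of that check against a hypothetical resolv.conf; the real kubelet reads the file given by --resolv-conf, and the sample content and limit constant below are illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Hypothetical resolv.conf content with more than three nameserver lines,
// including the duplicate seen in the log above.
const sampleResolvConf = `nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 1.1.1.1
search example.internal`

const maxNameservers = 3 // pods only ever receive the first three

func main() {
	var nameservers []string
	sc := bufio.NewScanner(strings.NewReader(sampleResolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}

	if len(nameservers) > maxNameservers {
		kept := nameservers[:maxNameservers]
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}
```

Trimming the droplet's /etc/resolv.conf to three unique resolvers (or pointing the kubelet's --resolv-conf at a cleaned-up copy) would likely silence this warning.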
Feb 13 20:15:46.598594 kubelet[2519]: I0213 20:15:46.598531 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2078e50f-92a3-48a4-a06d-6716baa97d30-lib-modules\") pod \"kube-proxy-8dxf7\" (UID: \"2078e50f-92a3-48a4-a06d-6716baa97d30\") " pod="kube-system/kube-proxy-8dxf7" Feb 13 20:15:46.599447 kubelet[2519]: I0213 20:15:46.599277 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tnb\" (UniqueName: \"kubernetes.io/projected/2078e50f-92a3-48a4-a06d-6716baa97d30-kube-api-access-z6tnb\") pod \"kube-proxy-8dxf7\" (UID: \"2078e50f-92a3-48a4-a06d-6716baa97d30\") " pod="kube-system/kube-proxy-8dxf7" Feb 13 20:15:46.599447 kubelet[2519]: I0213 20:15:46.599348 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2078e50f-92a3-48a4-a06d-6716baa97d30-kube-proxy\") pod \"kube-proxy-8dxf7\" (UID: \"2078e50f-92a3-48a4-a06d-6716baa97d30\") " pod="kube-system/kube-proxy-8dxf7" Feb 13 20:15:46.599447 kubelet[2519]: I0213 20:15:46.599373 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2078e50f-92a3-48a4-a06d-6716baa97d30-xtables-lock\") pod \"kube-proxy-8dxf7\" (UID: \"2078e50f-92a3-48a4-a06d-6716baa97d30\") " pod="kube-system/kube-proxy-8dxf7" Feb 13 20:15:46.700769 kubelet[2519]: I0213 20:15:46.700581 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c2ab36f-77b4-45b9-8a93-1d89cdfbcbc8-var-lib-calico\") pod \"tigera-operator-76c4976dd7-l6jmn\" (UID: \"3c2ab36f-77b4-45b9-8a93-1d89cdfbcbc8\") " pod="tigera-operator/tigera-operator-76c4976dd7-l6jmn" Feb 13 20:15:46.700769 kubelet[2519]: I0213 20:15:46.700710 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfbd\" (UniqueName: \"kubernetes.io/projected/3c2ab36f-77b4-45b9-8a93-1d89cdfbcbc8-kube-api-access-7kfbd\") pod \"tigera-operator-76c4976dd7-l6jmn\" (UID: \"3c2ab36f-77b4-45b9-8a93-1d89cdfbcbc8\") " pod="tigera-operator/tigera-operator-76c4976dd7-l6jmn" Feb 13 20:15:46.741231 kubelet[2519]: E0213 20:15:46.740770 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:46.742145 containerd[1475]: time="2025-02-13T20:15:46.742078680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8dxf7,Uid:2078e50f-92a3-48a4-a06d-6716baa97d30,Namespace:kube-system,Attempt:0,}" Feb 13 20:15:46.788270 containerd[1475]: time="2025-02-13T20:15:46.787238353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:46.788270 containerd[1475]: time="2025-02-13T20:15:46.788067606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:46.788270 containerd[1475]: time="2025-02-13T20:15:46.788086856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:46.788270 containerd[1475]: time="2025-02-13T20:15:46.788213296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:46.834075 systemd[1]: Started cri-containerd-992ddc8771c6726f62e0ecf01df7404d7710fc51a673ad56d7a5484b17306455.scope - libcontainer container 992ddc8771c6726f62e0ecf01df7404d7710fc51a673ad56d7a5484b17306455. Feb 13 20:15:46.867589 containerd[1475]: time="2025-02-13T20:15:46.866936764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-l6jmn,Uid:3c2ab36f-77b4-45b9-8a93-1d89cdfbcbc8,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:15:46.900165 containerd[1475]: time="2025-02-13T20:15:46.900115580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8dxf7,Uid:2078e50f-92a3-48a4-a06d-6716baa97d30,Namespace:kube-system,Attempt:0,} returns sandbox id \"992ddc8771c6726f62e0ecf01df7404d7710fc51a673ad56d7a5484b17306455\"" Feb 13 20:15:46.901888 kubelet[2519]: E0213 20:15:46.901850 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:46.909889 containerd[1475]: time="2025-02-13T20:15:46.909685346Z" level=info msg="CreateContainer within sandbox \"992ddc8771c6726f62e0ecf01df7404d7710fc51a673ad56d7a5484b17306455\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:15:46.943270 containerd[1475]: time="2025-02-13T20:15:46.942860166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:46.943270 containerd[1475]: time="2025-02-13T20:15:46.942953156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:46.943270 containerd[1475]: time="2025-02-13T20:15:46.942970439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:46.943270 containerd[1475]: time="2025-02-13T20:15:46.943097290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:46.967925 containerd[1475]: time="2025-02-13T20:15:46.967630610Z" level=info msg="CreateContainer within sandbox \"992ddc8771c6726f62e0ecf01df7404d7710fc51a673ad56d7a5484b17306455\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"574a3afe1910bfdaf0f2ea591c6f1593ef1f483e5cedad01b50fb753396d24ea\"" Feb 13 20:15:46.971847 containerd[1475]: time="2025-02-13T20:15:46.971385951Z" level=info msg="StartContainer for \"574a3afe1910bfdaf0f2ea591c6f1593ef1f483e5cedad01b50fb753396d24ea\"" Feb 13 20:15:46.976256 systemd[1]: Started cri-containerd-00591c37de860ffcd86a3137e3596adae5d9bdd66f57c6f7169fd098aca6f5e2.scope - libcontainer container 00591c37de860ffcd86a3137e3596adae5d9bdd66f57c6f7169fd098aca6f5e2. Feb 13 20:15:47.035431 systemd[1]: Started cri-containerd-574a3afe1910bfdaf0f2ea591c6f1593ef1f483e5cedad01b50fb753396d24ea.scope - libcontainer container 574a3afe1910bfdaf0f2ea591c6f1593ef1f483e5cedad01b50fb753396d24ea. 
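The containerd lines around the kube-proxy pod trace the CRI call sequence the kubelet drives for every pod: RunPodSandbox returns a sandbox ID, CreateContainer is issued against that sandbox and returns a container ID, and StartContainer runs it, with systemd starting a cri-containerd-<id>.scope for each. A compile-and-run sketch of that ordering using a locally defined stand-in interface; the real interface is the CRI RuntimeService in k8s.io/cri-api, and the names and fake IDs below are illustrative:

```go
package main

import "fmt"

// Simplified stand-in for the CRI RuntimeService calls visible in the log
// above (RunPodSandbox -> CreateContainer -> StartContainer). Only the
// ordering and the IDs that flow between the calls are modelled.
type runtimeService interface {
	RunPodSandbox(podName, namespace string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime is a toy implementation so the example runs standalone.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(podName, namespace string) (string, error) {
	return "sandbox-1", nil // containerd would return the long sandbox ID here
}
func (fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	return "container-1", nil // container created inside that sandbox
}
func (fakeRuntime) StartContainer(containerID string) error { return nil }

func startPod(rt runtimeService, pod, ns, container string) error {
	sandboxID, err := rt.RunPodSandbox(pod, ns)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	containerID, err := rt.CreateContainer(sandboxID, container)
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	return rt.StartContainer(containerID)
}

func main() {
	if err := startPod(fakeRuntime{}, "kube-proxy-8dxf7", "kube-system", "kube-proxy"); err != nil {
		fmt.Println("failed:", err)
		return
	}
	fmt.Println("sandbox created, container created and started")
}
```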
Feb 13 20:15:47.064664 containerd[1475]: time="2025-02-13T20:15:47.064065389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-l6jmn,Uid:3c2ab36f-77b4-45b9-8a93-1d89cdfbcbc8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"00591c37de860ffcd86a3137e3596adae5d9bdd66f57c6f7169fd098aca6f5e2\"" Feb 13 20:15:47.068703 containerd[1475]: time="2025-02-13T20:15:47.068659501Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:15:47.108045 containerd[1475]: time="2025-02-13T20:15:47.107985992Z" level=info msg="StartContainer for \"574a3afe1910bfdaf0f2ea591c6f1593ef1f483e5cedad01b50fb753396d24ea\" returns successfully" Feb 13 20:15:47.412544 kubelet[2519]: E0213 20:15:47.411171 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:47.415608 kubelet[2519]: E0213 20:15:47.415570 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:47.454349 kubelet[2519]: I0213 20:15:47.453506 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8dxf7" podStartSLOduration=1.4534786149999999 podStartE2EDuration="1.453478615s" podCreationTimestamp="2025-02-13 20:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:15:47.453016697 +0000 UTC m=+6.375367366" watchObservedRunningTime="2025-02-13 20:15:47.453478615 +0000 UTC m=+6.375829284" Feb 13 20:15:48.021106 sudo[1671]: pam_unix(sudo:session): session closed for user root Feb 13 20:15:48.027614 sshd[1668]: pam_unix(sshd:session): session closed for user core Feb 13 20:15:48.034672 systemd[1]: sshd@8-64.23.201.9:22-147.75.109.163:53954.service: Deactivated successfully. Feb 13 20:15:48.040285 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:15:48.040895 systemd[1]: session-9.scope: Consumed 6.407s CPU time, 152.7M memory peak, 0B memory swap peak. Feb 13 20:15:48.043607 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:15:48.045844 systemd-logind[1451]: Removed session 9. Feb 13 20:15:49.089784 kubelet[2519]: E0213 20:15:49.089625 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:49.420566 kubelet[2519]: E0213 20:15:49.420422 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:49.766374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897511549.mount: Deactivated successfully. 
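The transient unit name var-lib-containerd-tmpmounts-containerd\x2dmount2897511549.mount is systemd's escaped form of the mount path /var/lib/containerd/tmpmounts/containerd-mount2897511549: the leading slash is dropped, path separators become dashes, and bytes that would be ambiguous in a unit name, such as the literal dash in "containerd-mount", are hex-escaped as \xNN. A rough approximation of that escaping, not systemd's exact code (systemd-escape also handles cases like a leading dot):

```go
package main

import "fmt"

// Approximates what `systemd-escape --path --suffix=mount` produces for the
// paths seen in this log: strip the leading "/", map "/" to "-", and
// hex-escape anything that is not alphanumeric or one of ":_.".
func escapeMountUnit(path string) string {
	if len(path) > 0 && path[0] == '/' {
		path = path[1:]
	}
	out := make([]byte, 0, len(path))
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out) + ".mount"
}

func main() {
	fmt.Println(escapeMountUnit("/var/lib/containerd/tmpmounts/containerd-mount2897511549"))
	// prints: var-lib-containerd-tmpmounts-containerd\x2dmount2897511549.mount
}
```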
Feb 13 20:15:50.422644 kubelet[2519]: E0213 20:15:50.422600 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:50.449861 containerd[1475]: time="2025-02-13T20:15:50.448670379Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:50.451814 containerd[1475]: time="2025-02-13T20:15:50.451728075Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:15:50.454269 containerd[1475]: time="2025-02-13T20:15:50.454190699Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:50.459288 containerd[1475]: time="2025-02-13T20:15:50.459237919Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:50.461726 containerd[1475]: time="2025-02-13T20:15:50.461395489Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.39268261s" Feb 13 20:15:50.461726 containerd[1475]: time="2025-02-13T20:15:50.461462074Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:15:50.516882 containerd[1475]: time="2025-02-13T20:15:50.516837331Z" level=info msg="CreateContainer within sandbox \"00591c37de860ffcd86a3137e3596adae5d9bdd66f57c6f7169fd098aca6f5e2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:15:50.560410 containerd[1475]: time="2025-02-13T20:15:50.560247624Z" level=info msg="CreateContainer within sandbox \"00591c37de860ffcd86a3137e3596adae5d9bdd66f57c6f7169fd098aca6f5e2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c7991813f57886e48fe460f53d1df5bdbf22dd72c9c86ebb6946e7c4753a268c\"" Feb 13 20:15:50.561207 containerd[1475]: time="2025-02-13T20:15:50.561146596Z" level=info msg="StartContainer for \"c7991813f57886e48fe460f53d1df5bdbf22dd72c9c86ebb6946e7c4753a268c\"" Feb 13 20:15:50.605075 systemd[1]: Started cri-containerd-c7991813f57886e48fe460f53d1df5bdbf22dd72c9c86ebb6946e7c4753a268c.scope - libcontainer container c7991813f57886e48fe460f53d1df5bdbf22dd72c9c86ebb6946e7c4753a268c. 
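The pull result above records the same operator image three ways: the tag reference quay.io/tigera/operator:v1.36.2, the content-addressed repo digest (quay.io/tigera/operator@sha256:fc9e…), and the image id sha256:3045aa…, which is typically the digest of the image config rather than of the manifest, hence the two differing sha256 values. A small sketch that splits a reference string into repository, tag, and digest for the two shapes seen here; containerd's real reference parser covers more cases, such as references that carry both a tag and a digest:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef handles the two reference shapes recorded for the operator image
// above: "repo:tag" and "repo@sha256:...". This is an illustrative sketch,
// not a full OCI reference parser.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		return ref[:i], "", ref[i+1:]
	}
	// Only treat a ":" after the last "/" as a tag separator, so a registry
	// port like "registry:5000/foo" is not mistaken for a tag.
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		return ref[:i], ref[i+1:], ""
	}
	return ref, "", ""
}

func main() {
	for _, ref := range []string{
		"quay.io/tigera/operator:v1.36.2",
		"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764",
	} {
		repo, tag, digest := splitRef(ref)
		fmt.Printf("repo=%s tag=%s digest=%s\n", repo, tag, digest)
	}
}
```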
Feb 13 20:15:50.654640 containerd[1475]: time="2025-02-13T20:15:50.654588412Z" level=info msg="StartContainer for \"c7991813f57886e48fe460f53d1df5bdbf22dd72c9c86ebb6946e7c4753a268c\" returns successfully" Feb 13 20:15:54.164555 kubelet[2519]: I0213 20:15:54.164423 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-l6jmn" podStartSLOduration=4.746846339 podStartE2EDuration="8.164370548s" podCreationTimestamp="2025-02-13 20:15:46 +0000 UTC" firstStartedPulling="2025-02-13 20:15:47.067888898 +0000 UTC m=+5.990239548" lastFinishedPulling="2025-02-13 20:15:50.485413092 +0000 UTC m=+9.407763757" observedRunningTime="2025-02-13 20:15:51.475861228 +0000 UTC m=+10.398211899" watchObservedRunningTime="2025-02-13 20:15:54.164370548 +0000 UTC m=+13.086721217" Feb 13 20:15:54.195936 systemd[1]: Created slice kubepods-besteffort-podc2daaced_5e92_4a3e_87bf_4c5351f98f6e.slice - libcontainer container kubepods-besteffort-podc2daaced_5e92_4a3e_87bf_4c5351f98f6e.slice. Feb 13 20:15:54.339295 systemd[1]: Created slice kubepods-besteffort-pod8d875005_1176_4030_b347_4c52c13315a9.slice - libcontainer container kubepods-besteffort-pod8d875005_1176_4030_b347_4c52c13315a9.slice. Feb 13 20:15:54.361001 kubelet[2519]: I0213 20:15:54.360937 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c2daaced-5e92-4a3e-87bf-4c5351f98f6e-typha-certs\") pod \"calico-typha-79f5976469-zrgzs\" (UID: \"c2daaced-5e92-4a3e-87bf-4c5351f98f6e\") " pod="calico-system/calico-typha-79f5976469-zrgzs" Feb 13 20:15:54.361001 kubelet[2519]: I0213 20:15:54.361008 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2daaced-5e92-4a3e-87bf-4c5351f98f6e-tigera-ca-bundle\") pod \"calico-typha-79f5976469-zrgzs\" (UID: \"c2daaced-5e92-4a3e-87bf-4c5351f98f6e\") " pod="calico-system/calico-typha-79f5976469-zrgzs" Feb 13 20:15:54.361194 kubelet[2519]: I0213 20:15:54.361046 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r88g\" (UniqueName: \"kubernetes.io/projected/c2daaced-5e92-4a3e-87bf-4c5351f98f6e-kube-api-access-4r88g\") pod \"calico-typha-79f5976469-zrgzs\" (UID: \"c2daaced-5e92-4a3e-87bf-4c5351f98f6e\") " pod="calico-system/calico-typha-79f5976469-zrgzs" Feb 13 20:15:54.461707 kubelet[2519]: I0213 20:15:54.461565 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8d875005-1176-4030-b347-4c52c13315a9-cni-log-dir\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.461707 kubelet[2519]: I0213 20:15:54.461620 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8d875005-1176-4030-b347-4c52c13315a9-flexvol-driver-host\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.461707 kubelet[2519]: I0213 20:15:54.461669 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8d875005-1176-4030-b347-4c52c13315a9-node-certs\") pod \"calico-node-4kvb2\" 
(UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.461998 kubelet[2519]: I0213 20:15:54.461722 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d875005-1176-4030-b347-4c52c13315a9-lib-modules\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.461998 kubelet[2519]: I0213 20:15:54.461771 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8d875005-1176-4030-b347-4c52c13315a9-cni-bin-dir\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.461998 kubelet[2519]: I0213 20:15:54.461797 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8d875005-1176-4030-b347-4c52c13315a9-cni-net-dir\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.461998 kubelet[2519]: I0213 20:15:54.461820 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8d875005-1176-4030-b347-4c52c13315a9-policysync\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.461998 kubelet[2519]: I0213 20:15:54.461842 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d875005-1176-4030-b347-4c52c13315a9-tigera-ca-bundle\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.462301 kubelet[2519]: I0213 20:15:54.461864 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8d875005-1176-4030-b347-4c52c13315a9-var-lib-calico\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.462301 kubelet[2519]: I0213 20:15:54.461886 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v445j\" (UniqueName: \"kubernetes.io/projected/8d875005-1176-4030-b347-4c52c13315a9-kube-api-access-v445j\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.462301 kubelet[2519]: I0213 20:15:54.461913 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8d875005-1176-4030-b347-4c52c13315a9-var-run-calico\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.462301 kubelet[2519]: I0213 20:15:54.461940 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d875005-1176-4030-b347-4c52c13315a9-xtables-lock\") pod \"calico-node-4kvb2\" (UID: \"8d875005-1176-4030-b347-4c52c13315a9\") " 
pod="calico-system/calico-node-4kvb2" Feb 13 20:15:54.519769 kubelet[2519]: E0213 20:15:54.518676 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:54.520531 containerd[1475]: time="2025-02-13T20:15:54.520485785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79f5976469-zrgzs,Uid:c2daaced-5e92-4a3e-87bf-4c5351f98f6e,Namespace:calico-system,Attempt:0,}" Feb 13 20:15:54.527517 kubelet[2519]: E0213 20:15:54.527427 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-624nw" podUID="f105c06f-1a6f-4ec2-924d-9b57627c66c2" Feb 13 20:15:54.565173 kubelet[2519]: E0213 20:15:54.565102 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.565173 kubelet[2519]: W0213 20:15:54.565130 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.565614 kubelet[2519]: E0213 20:15:54.565267 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.566153 kubelet[2519]: E0213 20:15:54.566134 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.566388 kubelet[2519]: W0213 20:15:54.566208 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.566388 kubelet[2519]: E0213 20:15:54.566227 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.567332 kubelet[2519]: E0213 20:15:54.567030 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.567332 kubelet[2519]: W0213 20:15:54.567045 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.567332 kubelet[2519]: E0213 20:15:54.567234 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.568892 kubelet[2519]: E0213 20:15:54.568647 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.568892 kubelet[2519]: W0213 20:15:54.568665 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.568892 kubelet[2519]: E0213 20:15:54.568690 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.571327 kubelet[2519]: E0213 20:15:54.571108 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.571327 kubelet[2519]: W0213 20:15:54.571145 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.572462 kubelet[2519]: E0213 20:15:54.571545 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.574853 kubelet[2519]: E0213 20:15:54.573065 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.574853 kubelet[2519]: W0213 20:15:54.573331 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.575129 kubelet[2519]: E0213 20:15:54.575054 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.575267 kubelet[2519]: E0213 20:15:54.575211 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.575393 kubelet[2519]: W0213 20:15:54.575330 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.580831 kubelet[2519]: E0213 20:15:54.576978 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.582571 kubelet[2519]: W0213 20:15:54.581990 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.582571 kubelet[2519]: E0213 20:15:54.582054 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.586678 kubelet[2519]: E0213 20:15:54.586627 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.589562 kubelet[2519]: E0213 20:15:54.588467 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.589562 kubelet[2519]: W0213 20:15:54.588605 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.589562 kubelet[2519]: E0213 20:15:54.588632 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.593023 kubelet[2519]: E0213 20:15:54.592880 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.593023 kubelet[2519]: W0213 20:15:54.592908 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.593023 kubelet[2519]: E0213 20:15:54.592948 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.594900 containerd[1475]: time="2025-02-13T20:15:54.592854369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:54.594900 containerd[1475]: time="2025-02-13T20:15:54.592949554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:54.594900 containerd[1475]: time="2025-02-13T20:15:54.592969270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:54.594900 containerd[1475]: time="2025-02-13T20:15:54.593108689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:54.595565 kubelet[2519]: E0213 20:15:54.595303 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.595565 kubelet[2519]: W0213 20:15:54.595329 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.595565 kubelet[2519]: E0213 20:15:54.595371 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.595881 kubelet[2519]: E0213 20:15:54.595868 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.596034 kubelet[2519]: W0213 20:15:54.595953 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.596034 kubelet[2519]: E0213 20:15:54.595975 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.614608 kubelet[2519]: E0213 20:15:54.611967 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.614608 kubelet[2519]: W0213 20:15:54.611996 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.614608 kubelet[2519]: E0213 20:15:54.613023 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.614608 kubelet[2519]: E0213 20:15:54.614175 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.614608 kubelet[2519]: W0213 20:15:54.614192 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.614608 kubelet[2519]: E0213 20:15:54.614217 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.615930 kubelet[2519]: E0213 20:15:54.615886 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.615930 kubelet[2519]: W0213 20:15:54.615912 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.615930 kubelet[2519]: E0213 20:15:54.615934 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.617119 kubelet[2519]: E0213 20:15:54.617096 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.617119 kubelet[2519]: W0213 20:15:54.617116 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.617511 kubelet[2519]: E0213 20:15:54.617193 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.620858 kubelet[2519]: E0213 20:15:54.620819 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.620858 kubelet[2519]: W0213 20:15:54.620848 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.621206 kubelet[2519]: E0213 20:15:54.621186 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.621262 kubelet[2519]: W0213 20:15:54.621207 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.622802 kubelet[2519]: E0213 20:15:54.621607 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.622802 kubelet[2519]: W0213 20:15:54.621624 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.622802 kubelet[2519]: E0213 20:15:54.621648 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.622802 kubelet[2519]: E0213 20:15:54.622345 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.622802 kubelet[2519]: W0213 20:15:54.622358 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.622802 kubelet[2519]: E0213 20:15:54.622374 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.624688 kubelet[2519]: E0213 20:15:54.624616 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.624688 kubelet[2519]: E0213 20:15:54.624670 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.624953 kubelet[2519]: E0213 20:15:54.624720 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.625786 kubelet[2519]: W0213 20:15:54.625721 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.625896 kubelet[2519]: E0213 20:15:54.625796 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.628360 kubelet[2519]: E0213 20:15:54.628290 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.628360 kubelet[2519]: W0213 20:15:54.628324 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.628360 kubelet[2519]: E0213 20:15:54.628352 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.630494 kubelet[2519]: E0213 20:15:54.630435 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.630494 kubelet[2519]: W0213 20:15:54.630462 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.630494 kubelet[2519]: E0213 20:15:54.630483 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.632974 kubelet[2519]: E0213 20:15:54.631813 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.632974 kubelet[2519]: W0213 20:15:54.631830 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.632974 kubelet[2519]: E0213 20:15:54.631850 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.632974 kubelet[2519]: E0213 20:15:54.632255 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.632974 kubelet[2519]: W0213 20:15:54.632267 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.632974 kubelet[2519]: E0213 20:15:54.632279 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.636155 kubelet[2519]: E0213 20:15:54.633878 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.636155 kubelet[2519]: W0213 20:15:54.633901 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.636155 kubelet[2519]: E0213 20:15:54.633920 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.636155 kubelet[2519]: E0213 20:15:54.634196 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.636155 kubelet[2519]: W0213 20:15:54.634208 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.636155 kubelet[2519]: E0213 20:15:54.634222 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.636155 kubelet[2519]: E0213 20:15:54.635376 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.636155 kubelet[2519]: W0213 20:15:54.635390 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.636155 kubelet[2519]: E0213 20:15:54.635405 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.636155 kubelet[2519]: E0213 20:15:54.635943 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.636700 kubelet[2519]: W0213 20:15:54.635958 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.636700 kubelet[2519]: E0213 20:15:54.635975 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.636700 kubelet[2519]: E0213 20:15:54.636534 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.636700 kubelet[2519]: W0213 20:15:54.636546 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.636700 kubelet[2519]: E0213 20:15:54.636558 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.637003 kubelet[2519]: E0213 20:15:54.636805 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.637003 kubelet[2519]: W0213 20:15:54.636814 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.637003 kubelet[2519]: E0213 20:15:54.636826 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.637288 kubelet[2519]: E0213 20:15:54.637266 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.637288 kubelet[2519]: W0213 20:15:54.637283 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.637421 kubelet[2519]: E0213 20:15:54.637296 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.637612 kubelet[2519]: E0213 20:15:54.637541 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.637612 kubelet[2519]: W0213 20:15:54.637560 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.637612 kubelet[2519]: E0213 20:15:54.637575 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.638603 kubelet[2519]: E0213 20:15:54.638055 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.638603 kubelet[2519]: W0213 20:15:54.638067 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.638603 kubelet[2519]: E0213 20:15:54.638081 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.639210 kubelet[2519]: E0213 20:15:54.639012 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.639210 kubelet[2519]: W0213 20:15:54.639028 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.639210 kubelet[2519]: E0213 20:15:54.639070 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.645292 kubelet[2519]: E0213 20:15:54.644929 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:54.646160 containerd[1475]: time="2025-02-13T20:15:54.646086625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4kvb2,Uid:8d875005-1176-4030-b347-4c52c13315a9,Namespace:calico-system,Attempt:0,}" Feb 13 20:15:54.664113 kubelet[2519]: E0213 20:15:54.664070 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.664113 kubelet[2519]: W0213 20:15:54.664094 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.664381 kubelet[2519]: E0213 20:15:54.664340 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.664414 kubelet[2519]: I0213 20:15:54.664378 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f105c06f-1a6f-4ec2-924d-9b57627c66c2-kubelet-dir\") pod \"csi-node-driver-624nw\" (UID: \"f105c06f-1a6f-4ec2-924d-9b57627c66c2\") " pod="calico-system/csi-node-driver-624nw" Feb 13 20:15:54.665363 systemd[1]: Started cri-containerd-32f3ec3bec78235c7f834f1b74fdd41ddf4984734c655b18a3ed10205a0c876a.scope - libcontainer container 32f3ec3bec78235c7f834f1b74fdd41ddf4984734c655b18a3ed10205a0c876a. Feb 13 20:15:54.667027 kubelet[2519]: E0213 20:15:54.666998 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.667106 kubelet[2519]: W0213 20:15:54.667026 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.667106 kubelet[2519]: E0213 20:15:54.667062 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.667106 kubelet[2519]: I0213 20:15:54.667097 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g9nn\" (UniqueName: \"kubernetes.io/projected/f105c06f-1a6f-4ec2-924d-9b57627c66c2-kube-api-access-8g9nn\") pod \"csi-node-driver-624nw\" (UID: \"f105c06f-1a6f-4ec2-924d-9b57627c66c2\") " pod="calico-system/csi-node-driver-624nw" Feb 13 20:15:54.670494 kubelet[2519]: E0213 20:15:54.669512 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.670494 kubelet[2519]: W0213 20:15:54.669539 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.670494 kubelet[2519]: E0213 20:15:54.669568 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.670494 kubelet[2519]: I0213 20:15:54.669600 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f105c06f-1a6f-4ec2-924d-9b57627c66c2-varrun\") pod \"csi-node-driver-624nw\" (UID: \"f105c06f-1a6f-4ec2-924d-9b57627c66c2\") " pod="calico-system/csi-node-driver-624nw" Feb 13 20:15:54.671396 kubelet[2519]: E0213 20:15:54.671362 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.671504 kubelet[2519]: W0213 20:15:54.671399 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.671504 kubelet[2519]: E0213 20:15:54.671454 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.671667 kubelet[2519]: I0213 20:15:54.671521 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f105c06f-1a6f-4ec2-924d-9b57627c66c2-socket-dir\") pod \"csi-node-driver-624nw\" (UID: \"f105c06f-1a6f-4ec2-924d-9b57627c66c2\") " pod="calico-system/csi-node-driver-624nw" Feb 13 20:15:54.673330 kubelet[2519]: E0213 20:15:54.672970 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.673330 kubelet[2519]: W0213 20:15:54.673176 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.673330 kubelet[2519]: E0213 20:15:54.673213 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.674667 kubelet[2519]: E0213 20:15:54.674640 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.674667 kubelet[2519]: W0213 20:15:54.674664 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.675085 kubelet[2519]: E0213 20:15:54.674695 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.675689 kubelet[2519]: E0213 20:15:54.675557 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.675689 kubelet[2519]: W0213 20:15:54.675584 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.676322 kubelet[2519]: E0213 20:15:54.675826 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.676322 kubelet[2519]: E0213 20:15:54.676132 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.676322 kubelet[2519]: W0213 20:15:54.676145 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.676322 kubelet[2519]: E0213 20:15:54.676159 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.676322 kubelet[2519]: I0213 20:15:54.676193 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f105c06f-1a6f-4ec2-924d-9b57627c66c2-registration-dir\") pod \"csi-node-driver-624nw\" (UID: \"f105c06f-1a6f-4ec2-924d-9b57627c66c2\") " pod="calico-system/csi-node-driver-624nw" Feb 13 20:15:54.676812 kubelet[2519]: E0213 20:15:54.676787 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.677138 kubelet[2519]: W0213 20:15:54.676819 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.677138 kubelet[2519]: E0213 20:15:54.676841 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.677382 kubelet[2519]: E0213 20:15:54.677354 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.677382 kubelet[2519]: W0213 20:15:54.677367 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.677382 kubelet[2519]: E0213 20:15:54.677380 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.678166 kubelet[2519]: E0213 20:15:54.678032 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.678166 kubelet[2519]: W0213 20:15:54.678045 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.678166 kubelet[2519]: E0213 20:15:54.678065 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.678973 kubelet[2519]: E0213 20:15:54.678914 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.678973 kubelet[2519]: W0213 20:15:54.678931 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.678973 kubelet[2519]: E0213 20:15:54.678945 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.679475 kubelet[2519]: E0213 20:15:54.679303 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.679475 kubelet[2519]: W0213 20:15:54.679314 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.679475 kubelet[2519]: E0213 20:15:54.679327 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.680689 kubelet[2519]: E0213 20:15:54.680465 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.680689 kubelet[2519]: W0213 20:15:54.680479 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.680689 kubelet[2519]: E0213 20:15:54.680497 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.680940 kubelet[2519]: E0213 20:15:54.680905 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.680940 kubelet[2519]: W0213 20:15:54.680923 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.680940 kubelet[2519]: E0213 20:15:54.680937 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.734886 containerd[1475]: time="2025-02-13T20:15:54.732487081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:15:54.734886 containerd[1475]: time="2025-02-13T20:15:54.732573353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:15:54.734886 containerd[1475]: time="2025-02-13T20:15:54.732603113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:54.738656 containerd[1475]: time="2025-02-13T20:15:54.736964398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:15:54.780695 kubelet[2519]: E0213 20:15:54.780453 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.780695 kubelet[2519]: W0213 20:15:54.780483 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.780695 kubelet[2519]: E0213 20:15:54.780512 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.781635 kubelet[2519]: E0213 20:15:54.781415 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.781635 kubelet[2519]: W0213 20:15:54.781439 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.781635 kubelet[2519]: E0213 20:15:54.781480 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.783903 kubelet[2519]: E0213 20:15:54.783630 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.783903 kubelet[2519]: W0213 20:15:54.783670 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.783903 kubelet[2519]: E0213 20:15:54.783710 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.784645 kubelet[2519]: E0213 20:15:54.784434 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.784645 kubelet[2519]: W0213 20:15:54.784455 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.784645 kubelet[2519]: E0213 20:15:54.784611 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.786489 kubelet[2519]: E0213 20:15:54.785336 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.786489 kubelet[2519]: W0213 20:15:54.785356 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.786915 kubelet[2519]: E0213 20:15:54.786780 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.786915 kubelet[2519]: W0213 20:15:54.786801 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.787869 kubelet[2519]: E0213 20:15:54.787760 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.788772 kubelet[2519]: W0213 20:15:54.788166 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.788772 kubelet[2519]: E0213 20:15:54.788327 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.789724 kubelet[2519]: E0213 20:15:54.789703 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.789937 kubelet[2519]: W0213 20:15:54.789780 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.789937 kubelet[2519]: E0213 20:15:54.789810 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.790247 kubelet[2519]: E0213 20:15:54.790164 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.791904 systemd[1]: Started cri-containerd-25277a1f992b661a90226552eddef45a4e869358fa4c3a77be60e9e7224a5f2b.scope - libcontainer container 25277a1f992b661a90226552eddef45a4e869358fa4c3a77be60e9e7224a5f2b. Feb 13 20:15:54.792399 kubelet[2519]: E0213 20:15:54.792166 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.792399 kubelet[2519]: W0213 20:15:54.792199 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.792399 kubelet[2519]: E0213 20:15:54.792226 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.793428 kubelet[2519]: E0213 20:15:54.793182 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.793428 kubelet[2519]: W0213 20:15:54.793208 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.793428 kubelet[2519]: E0213 20:15:54.793235 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.793932 kubelet[2519]: E0213 20:15:54.793910 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.794893 kubelet[2519]: W0213 20:15:54.794683 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.794893 kubelet[2519]: E0213 20:15:54.794728 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.796151 kubelet[2519]: E0213 20:15:54.796119 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.796550 kubelet[2519]: W0213 20:15:54.796239 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.796550 kubelet[2519]: E0213 20:15:54.796448 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.797439 kubelet[2519]: E0213 20:15:54.797419 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.797799 kubelet[2519]: W0213 20:15:54.797499 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.797799 kubelet[2519]: E0213 20:15:54.797526 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.798788 kubelet[2519]: E0213 20:15:54.798725 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.798989 kubelet[2519]: W0213 20:15:54.798967 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.799215 kubelet[2519]: E0213 20:15:54.799161 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.799457 kubelet[2519]: E0213 20:15:54.799403 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.801425 kubelet[2519]: E0213 20:15:54.801396 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.801662 kubelet[2519]: W0213 20:15:54.801584 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.801882 kubelet[2519]: E0213 20:15:54.801806 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.803478 kubelet[2519]: E0213 20:15:54.803144 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.803478 kubelet[2519]: W0213 20:15:54.803174 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.803478 kubelet[2519]: E0213 20:15:54.803219 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.805235 kubelet[2519]: E0213 20:15:54.805062 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.805235 kubelet[2519]: W0213 20:15:54.805090 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.806324 kubelet[2519]: E0213 20:15:54.806054 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.807230 kubelet[2519]: E0213 20:15:54.806781 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.807230 kubelet[2519]: W0213 20:15:54.806808 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.808870 kubelet[2519]: E0213 20:15:54.808638 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.808870 kubelet[2519]: E0213 20:15:54.808817 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.808870 kubelet[2519]: W0213 20:15:54.808834 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.809387 kubelet[2519]: E0213 20:15:54.809286 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.811355 kubelet[2519]: E0213 20:15:54.811274 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.812323 kubelet[2519]: W0213 20:15:54.812128 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.813187 kubelet[2519]: E0213 20:15:54.812809 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.814088 kubelet[2519]: E0213 20:15:54.813962 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.814595 kubelet[2519]: W0213 20:15:54.814348 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.815624 kubelet[2519]: E0213 20:15:54.814965 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.819163 kubelet[2519]: E0213 20:15:54.818882 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.819163 kubelet[2519]: W0213 20:15:54.818916 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.819163 kubelet[2519]: E0213 20:15:54.818951 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.821445 kubelet[2519]: E0213 20:15:54.819990 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.821445 kubelet[2519]: W0213 20:15:54.820026 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.821445 kubelet[2519]: E0213 20:15:54.820138 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.822532 kubelet[2519]: E0213 20:15:54.822407 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.824106 kubelet[2519]: W0213 20:15:54.823548 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.824106 kubelet[2519]: E0213 20:15:54.823608 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.825383 kubelet[2519]: E0213 20:15:54.825271 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.825383 kubelet[2519]: W0213 20:15:54.825297 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.825383 kubelet[2519]: E0213 20:15:54.825326 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.841154 kubelet[2519]: E0213 20:15:54.840991 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.841154 kubelet[2519]: W0213 20:15:54.841023 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.842281 kubelet[2519]: E0213 20:15:54.841912 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.848325 kubelet[2519]: E0213 20:15:54.847713 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:54.890857 containerd[1475]: time="2025-02-13T20:15:54.890642652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4kvb2,Uid:8d875005-1176-4030-b347-4c52c13315a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"25277a1f992b661a90226552eddef45a4e869358fa4c3a77be60e9e7224a5f2b\"" Feb 13 20:15:54.892321 kubelet[2519]: E0213 20:15:54.892281 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:54.895525 containerd[1475]: time="2025-02-13T20:15:54.895419809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:15:54.943386 kubelet[2519]: E0213 20:15:54.942930 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.943386 kubelet[2519]: W0213 20:15:54.942963 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.943386 kubelet[2519]: E0213 20:15:54.943084 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.943696 kubelet[2519]: E0213 20:15:54.943637 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.944330 kubelet[2519]: W0213 20:15:54.943876 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.944330 kubelet[2519]: E0213 20:15:54.943915 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.944987 kubelet[2519]: E0213 20:15:54.944494 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.944987 kubelet[2519]: W0213 20:15:54.944512 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.944987 kubelet[2519]: E0213 20:15:54.944536 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:54.944987 kubelet[2519]: E0213 20:15:54.944850 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.944987 kubelet[2519]: W0213 20:15:54.944863 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.944987 kubelet[2519]: E0213 20:15:54.944973 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.946556 kubelet[2519]: E0213 20:15:54.945942 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:54.946556 kubelet[2519]: W0213 20:15:54.945960 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:54.946556 kubelet[2519]: E0213 20:15:54.945998 2519 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:54.948492 containerd[1475]: time="2025-02-13T20:15:54.945715978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79f5976469-zrgzs,Uid:c2daaced-5e92-4a3e-87bf-4c5351f98f6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"32f3ec3bec78235c7f834f1b74fdd41ddf4984734c655b18a3ed10205a0c876a\"" Feb 13 20:15:54.948609 kubelet[2519]: E0213 20:15:54.947037 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:56.331554 kubelet[2519]: E0213 20:15:56.331485 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-624nw" podUID="f105c06f-1a6f-4ec2-924d-9b57627c66c2" Feb 13 20:15:56.333218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198989533.mount: Deactivated successfully. 
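The driver-call.go / plugins.go failures repeated throughout the entries above are all the same condition: the kubelet's FlexVolume probe finds a `nodeagent~uds` plugin directory, cannot execute `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds init` (the binary is not installed yet), and then fails to decode the empty stdout as JSON. A minimal Go sketch of that decode step (the `DriverStatus` type here is illustrative, not the kubelet's exact struct) produces the same "unexpected end of JSON input" error:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is an illustrative stand-in for the JSON a FlexVolume driver
// is expected to print on stdout when called with "init".
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The uds binary was not found, so the captured stdout is empty.
	output := ""

	var st DriverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		// Prints: failed to unmarshal "init" output: unexpected end of JSON input
		fmt.Printf("failed to unmarshal %q output: %v\n", "init", err)
	}
}
```

The warnings are noisy but expected at this point in boot; they typically stop once the driver binary is installed, which is what the flexvol-driver init container pulled in the following entries is for.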
Feb 13 20:15:56.514811 containerd[1475]: time="2025-02-13T20:15:56.514380591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:56.516697 containerd[1475]: time="2025-02-13T20:15:56.516379914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 20:15:56.519915 containerd[1475]: time="2025-02-13T20:15:56.519864307Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:56.525018 containerd[1475]: time="2025-02-13T20:15:56.524929848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:56.527253 containerd[1475]: time="2025-02-13T20:15:56.526215846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.63072727s" Feb 13 20:15:56.527253 containerd[1475]: time="2025-02-13T20:15:56.526273462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:15:56.530825 containerd[1475]: time="2025-02-13T20:15:56.529982303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:15:56.533905 containerd[1475]: time="2025-02-13T20:15:56.533656248Z" level=info msg="CreateContainer within sandbox \"25277a1f992b661a90226552eddef45a4e869358fa4c3a77be60e9e7224a5f2b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:15:56.574395 containerd[1475]: time="2025-02-13T20:15:56.574334569Z" level=info msg="CreateContainer within sandbox \"25277a1f992b661a90226552eddef45a4e869358fa4c3a77be60e9e7224a5f2b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"63c61d82c82903a6f1fb11b04b31aa732ae79b50284f07e507a24b82e73181b6\"" Feb 13 20:15:56.579171 containerd[1475]: time="2025-02-13T20:15:56.576518116Z" level=info msg="StartContainer for \"63c61d82c82903a6f1fb11b04b31aa732ae79b50284f07e507a24b82e73181b6\"" Feb 13 20:15:56.637076 systemd[1]: Started cri-containerd-63c61d82c82903a6f1fb11b04b31aa732ae79b50284f07e507a24b82e73181b6.scope - libcontainer container 63c61d82c82903a6f1fb11b04b31aa732ae79b50284f07e507a24b82e73181b6. Feb 13 20:15:56.687399 containerd[1475]: time="2025-02-13T20:15:56.687329753Z" level=info msg="StartContainer for \"63c61d82c82903a6f1fb11b04b31aa732ae79b50284f07e507a24b82e73181b6\" returns successfully" Feb 13 20:15:56.704808 systemd[1]: cri-containerd-63c61d82c82903a6f1fb11b04b31aa732ae79b50284f07e507a24b82e73181b6.scope: Deactivated successfully. Feb 13 20:15:56.749584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63c61d82c82903a6f1fb11b04b31aa732ae79b50284f07e507a24b82e73181b6-rootfs.mount: Deactivated successfully. 
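The pod2daemon-flexvol image pulled and started above as the short-lived flexvol-driver container is the component that installs the `uds` FlexVolume driver under `nodeagent~uds/` in the plugin directory the kubelet has been probing, which is why the container exits almost immediately and its shim is cleaned up. A rough sketch of such a probe over the documented `<vendor>~<driver>` directory layout; `probeFlexVolumeDir` is a hypothetical helper for reasoning about the log, not the kubelet's own code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// probeFlexVolumeDir models the FlexVolume convention: every subdirectory of
// the plugin dir is named "<vendor>~<driver>" and must contain an executable
// named "<driver>". Missing or non-executable drivers are reported as errors.
func probeFlexVolumeDir(pluginDir string) []error {
	var errs []error
	entries, err := os.ReadDir(pluginDir)
	if err != nil {
		return []error{err}
	}
	for _, e := range entries {
		if !e.IsDir() || !strings.Contains(e.Name(), "~") {
			continue
		}
		driver := strings.SplitN(e.Name(), "~", 2)[1]
		exe := filepath.Join(pluginDir, e.Name(), driver)
		if info, err := os.Stat(exe); err != nil || info.Mode()&0o111 == 0 {
			errs = append(errs, fmt.Errorf("plugin %q: executable %s not usable: %v",
				e.Name(), exe, err))
		}
	}
	return errs
}

func main() {
	// The directory referenced throughout the kubelet entries above.
	for _, err := range probeFlexVolumeDir("/opt/libexec/kubernetes/kubelet-plugins/volume/exec") {
		fmt.Println(err)
	}
}
```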
Feb 13 20:15:56.763730 containerd[1475]: time="2025-02-13T20:15:56.763645119Z" level=info msg="shim disconnected" id=63c61d82c82903a6f1fb11b04b31aa732ae79b50284f07e507a24b82e73181b6 namespace=k8s.io Feb 13 20:15:56.763730 containerd[1475]: time="2025-02-13T20:15:56.763722410Z" level=warning msg="cleaning up after shim disconnected" id=63c61d82c82903a6f1fb11b04b31aa732ae79b50284f07e507a24b82e73181b6 namespace=k8s.io Feb 13 20:15:56.764020 containerd[1475]: time="2025-02-13T20:15:56.763777317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:15:57.463691 kubelet[2519]: E0213 20:15:57.463111 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:15:58.330332 kubelet[2519]: E0213 20:15:58.330254 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-624nw" podUID="f105c06f-1a6f-4ec2-924d-9b57627c66c2" Feb 13 20:15:59.679341 containerd[1475]: time="2025-02-13T20:15:59.678534314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:59.680720 containerd[1475]: time="2025-02-13T20:15:59.680629662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 20:15:59.686795 containerd[1475]: time="2025-02-13T20:15:59.685020893Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:59.690796 containerd[1475]: time="2025-02-13T20:15:59.690704040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:59.692016 containerd[1475]: time="2025-02-13T20:15:59.691962606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.161932711s" Feb 13 20:15:59.692016 containerd[1475]: time="2025-02-13T20:15:59.692017683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:15:59.695954 containerd[1475]: time="2025-02-13T20:15:59.695904584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:15:59.722954 containerd[1475]: time="2025-02-13T20:15:59.722900829Z" level=info msg="CreateContainer within sandbox \"32f3ec3bec78235c7f834f1b74fdd41ddf4984734c655b18a3ed10205a0c876a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:15:59.757093 containerd[1475]: time="2025-02-13T20:15:59.756899806Z" level=info msg="CreateContainer within sandbox \"32f3ec3bec78235c7f834f1b74fdd41ddf4984734c655b18a3ed10205a0c876a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"9b99c2d8c70b7f7b17ab74f8e0f801966b53a0c6df15a0db7dd50bb3ad5f09c7\"" Feb 13 20:15:59.759829 containerd[1475]: time="2025-02-13T20:15:59.759788108Z" level=info msg="StartContainer for \"9b99c2d8c70b7f7b17ab74f8e0f801966b53a0c6df15a0db7dd50bb3ad5f09c7\"" Feb 13 20:15:59.824146 systemd[1]: Started cri-containerd-9b99c2d8c70b7f7b17ab74f8e0f801966b53a0c6df15a0db7dd50bb3ad5f09c7.scope - libcontainer container 9b99c2d8c70b7f7b17ab74f8e0f801966b53a0c6df15a0db7dd50bb3ad5f09c7. Feb 13 20:15:59.894552 containerd[1475]: time="2025-02-13T20:15:59.894476895Z" level=info msg="StartContainer for \"9b99c2d8c70b7f7b17ab74f8e0f801966b53a0c6df15a0db7dd50bb3ad5f09c7\" returns successfully" Feb 13 20:16:00.330382 kubelet[2519]: E0213 20:16:00.330301 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-624nw" podUID="f105c06f-1a6f-4ec2-924d-9b57627c66c2" Feb 13 20:16:00.477836 kubelet[2519]: E0213 20:16:00.477729 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:00.512580 kubelet[2519]: I0213 20:16:00.511859 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79f5976469-zrgzs" podStartSLOduration=1.769140219 podStartE2EDuration="6.511830834s" podCreationTimestamp="2025-02-13 20:15:54 +0000 UTC" firstStartedPulling="2025-02-13 20:15:54.951932276 +0000 UTC m=+13.874282923" lastFinishedPulling="2025-02-13 20:15:59.694622877 +0000 UTC m=+18.616973538" observedRunningTime="2025-02-13 20:16:00.511613245 +0000 UTC m=+19.433963916" watchObservedRunningTime="2025-02-13 20:16:00.511830834 +0000 UTC m=+19.434181505" Feb 13 20:16:01.484021 kubelet[2519]: I0213 20:16:01.477932 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:01.484021 kubelet[2519]: E0213 20:16:01.478456 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:02.331664 kubelet[2519]: E0213 20:16:02.330464 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-624nw" podUID="f105c06f-1a6f-4ec2-924d-9b57627c66c2" Feb 13 20:16:04.330510 kubelet[2519]: E0213 20:16:04.330451 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-624nw" podUID="f105c06f-1a6f-4ec2-924d-9b57627c66c2" Feb 13 20:16:04.455811 containerd[1475]: time="2025-02-13T20:16:04.455116844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:04.457624 containerd[1475]: time="2025-02-13T20:16:04.457388989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:16:04.460568 containerd[1475]: 
time="2025-02-13T20:16:04.460510177Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:04.469485 containerd[1475]: time="2025-02-13T20:16:04.468801457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:04.470756 containerd[1475]: time="2025-02-13T20:16:04.470676735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.774713839s" Feb 13 20:16:04.470756 containerd[1475]: time="2025-02-13T20:16:04.470752311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:16:04.476358 containerd[1475]: time="2025-02-13T20:16:04.476290452Z" level=info msg="CreateContainer within sandbox \"25277a1f992b661a90226552eddef45a4e869358fa4c3a77be60e9e7224a5f2b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:16:04.535644 containerd[1475]: time="2025-02-13T20:16:04.535443333Z" level=info msg="CreateContainer within sandbox \"25277a1f992b661a90226552eddef45a4e869358fa4c3a77be60e9e7224a5f2b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2d95ef5d586c11493253d5d7077c4199970a4a095174158e9ab258174cc5bd10\"" Feb 13 20:16:04.537884 containerd[1475]: time="2025-02-13T20:16:04.537833510Z" level=info msg="StartContainer for \"2d95ef5d586c11493253d5d7077c4199970a4a095174158e9ab258174cc5bd10\"" Feb 13 20:16:04.674042 systemd[1]: Started cri-containerd-2d95ef5d586c11493253d5d7077c4199970a4a095174158e9ab258174cc5bd10.scope - libcontainer container 2d95ef5d586c11493253d5d7077c4199970a4a095174158e9ab258174cc5bd10. Feb 13 20:16:04.719715 containerd[1475]: time="2025-02-13T20:16:04.719656469Z" level=info msg="StartContainer for \"2d95ef5d586c11493253d5d7077c4199970a4a095174158e9ab258174cc5bd10\" returns successfully" Feb 13 20:16:05.388159 systemd[1]: cri-containerd-2d95ef5d586c11493253d5d7077c4199970a4a095174158e9ab258174cc5bd10.scope: Deactivated successfully. Feb 13 20:16:05.450406 containerd[1475]: time="2025-02-13T20:16:05.447538230Z" level=info msg="shim disconnected" id=2d95ef5d586c11493253d5d7077c4199970a4a095174158e9ab258174cc5bd10 namespace=k8s.io Feb 13 20:16:05.450406 containerd[1475]: time="2025-02-13T20:16:05.447617614Z" level=warning msg="cleaning up after shim disconnected" id=2d95ef5d586c11493253d5d7077c4199970a4a095174158e9ab258174cc5bd10 namespace=k8s.io Feb 13 20:16:05.450406 containerd[1475]: time="2025-02-13T20:16:05.447631107Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:16:05.449241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d95ef5d586c11493253d5d7077c4199970a4a095174158e9ab258174cc5bd10-rootfs.mount: Deactivated successfully. 
Feb 13 20:16:05.471896 containerd[1475]: time="2025-02-13T20:16:05.471826047Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:16:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:16:05.497771 kubelet[2519]: E0213 20:16:05.495902 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:05.504821 containerd[1475]: time="2025-02-13T20:16:05.503434565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:16:05.507622 kubelet[2519]: I0213 20:16:05.507552 2519 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 20:16:05.598444 systemd[1]: Created slice kubepods-besteffort-pod82fe64ff_9e75_4fc3_a4a3_1b4e07cd7dd2.slice - libcontainer container kubepods-besteffort-pod82fe64ff_9e75_4fc3_a4a3_1b4e07cd7dd2.slice. Feb 13 20:16:05.619149 systemd[1]: Created slice kubepods-burstable-pod58af9d69_676e_41db_a9d1_fb2841461113.slice - libcontainer container kubepods-burstable-pod58af9d69_676e_41db_a9d1_fb2841461113.slice. Feb 13 20:16:05.628179 systemd[1]: Created slice kubepods-burstable-podcb4743e4_e761_441d_ae53_4d9924d89649.slice - libcontainer container kubepods-burstable-podcb4743e4_e761_441d_ae53_4d9924d89649.slice. Feb 13 20:16:05.641017 systemd[1]: Created slice kubepods-besteffort-poda042f4c6_cdbb_48a4_9920_8318d966e49f.slice - libcontainer container kubepods-besteffort-poda042f4c6_cdbb_48a4_9920_8318d966e49f.slice. Feb 13 20:16:05.655561 systemd[1]: Created slice kubepods-besteffort-pod648198c7_66dc_48d6_8b8d_fd320bc90666.slice - libcontainer container kubepods-besteffort-pod648198c7_66dc_48d6_8b8d_fd320bc90666.slice. 
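Each "Created slice" line above corresponds to one of the pods whose volumes are reconciled next: the slice name encodes the pod's QoS class and UID, with the UID's dashes rewritten to underscores because "-" separates hierarchy levels in systemd slice names. A tiny illustrative helper (`sliceNameFor` is hypothetical, not kubelet code) reproduces the names seen in the log:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameFor rebuilds the systemd slice name used for a pod cgroup on this
// node: "kubepods-<qos>-pod<uid>.slice", with the UID's dashes turned into
// underscores so they are not read as slice nesting separators.
func sliceNameFor(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Matches "Created slice kubepods-besteffort-pod82fe64ff_9e75_4fc3_a4a3_1b4e07cd7dd2.slice".
	fmt.Println(sliceNameFor("besteffort", "82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2"))
	// Matches the burstable coredns slice created just after it.
	fmt.Println(sliceNameFor("burstable", "58af9d69-676e-41db-a9d1-fb2841461113"))
}
```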
Feb 13 20:16:05.676988 kubelet[2519]: I0213 20:16:05.676898 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a042f4c6-cdbb-48a4-9920-8318d966e49f-calico-apiserver-certs\") pod \"calico-apiserver-8479cf5b7f-mz564\" (UID: \"a042f4c6-cdbb-48a4-9920-8318d966e49f\") " pod="calico-apiserver/calico-apiserver-8479cf5b7f-mz564" Feb 13 20:16:05.676988 kubelet[2519]: I0213 20:16:05.676975 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnrlz\" (UniqueName: \"kubernetes.io/projected/82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2-kube-api-access-gnrlz\") pod \"calico-apiserver-8479cf5b7f-4q5j2\" (UID: \"82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2\") " pod="calico-apiserver/calico-apiserver-8479cf5b7f-4q5j2" Feb 13 20:16:05.677482 kubelet[2519]: I0213 20:16:05.677019 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/648198c7-66dc-48d6-8b8d-fd320bc90666-tigera-ca-bundle\") pod \"calico-kube-controllers-7887f69768-ls7rf\" (UID: \"648198c7-66dc-48d6-8b8d-fd320bc90666\") " pod="calico-system/calico-kube-controllers-7887f69768-ls7rf" Feb 13 20:16:05.677482 kubelet[2519]: I0213 20:16:05.677045 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcsxb\" (UniqueName: \"kubernetes.io/projected/58af9d69-676e-41db-a9d1-fb2841461113-kube-api-access-qcsxb\") pod \"coredns-6f6b679f8f-62bhh\" (UID: \"58af9d69-676e-41db-a9d1-fb2841461113\") " pod="kube-system/coredns-6f6b679f8f-62bhh" Feb 13 20:16:05.677482 kubelet[2519]: I0213 20:16:05.677076 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb4743e4-e761-441d-ae53-4d9924d89649-config-volume\") pod \"coredns-6f6b679f8f-fbch8\" (UID: \"cb4743e4-e761-441d-ae53-4d9924d89649\") " pod="kube-system/coredns-6f6b679f8f-fbch8" Feb 13 20:16:05.677482 kubelet[2519]: I0213 20:16:05.677106 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2-calico-apiserver-certs\") pod \"calico-apiserver-8479cf5b7f-4q5j2\" (UID: \"82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2\") " pod="calico-apiserver/calico-apiserver-8479cf5b7f-4q5j2" Feb 13 20:16:05.677482 kubelet[2519]: I0213 20:16:05.677138 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckdj9\" (UniqueName: \"kubernetes.io/projected/a042f4c6-cdbb-48a4-9920-8318d966e49f-kube-api-access-ckdj9\") pod \"calico-apiserver-8479cf5b7f-mz564\" (UID: \"a042f4c6-cdbb-48a4-9920-8318d966e49f\") " pod="calico-apiserver/calico-apiserver-8479cf5b7f-mz564" Feb 13 20:16:05.677793 kubelet[2519]: I0213 20:16:05.677167 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58af9d69-676e-41db-a9d1-fb2841461113-config-volume\") pod \"coredns-6f6b679f8f-62bhh\" (UID: \"58af9d69-676e-41db-a9d1-fb2841461113\") " pod="kube-system/coredns-6f6b679f8f-62bhh" Feb 13 20:16:05.677793 kubelet[2519]: I0213 20:16:05.677195 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wqwnc\" (UniqueName: \"kubernetes.io/projected/648198c7-66dc-48d6-8b8d-fd320bc90666-kube-api-access-wqwnc\") pod \"calico-kube-controllers-7887f69768-ls7rf\" (UID: \"648198c7-66dc-48d6-8b8d-fd320bc90666\") " pod="calico-system/calico-kube-controllers-7887f69768-ls7rf" Feb 13 20:16:05.677793 kubelet[2519]: I0213 20:16:05.677230 2519 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tfsm\" (UniqueName: \"kubernetes.io/projected/cb4743e4-e761-441d-ae53-4d9924d89649-kube-api-access-2tfsm\") pod \"coredns-6f6b679f8f-fbch8\" (UID: \"cb4743e4-e761-441d-ae53-4d9924d89649\") " pod="kube-system/coredns-6f6b679f8f-fbch8" Feb 13 20:16:05.913211 containerd[1475]: time="2025-02-13T20:16:05.913051305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479cf5b7f-4q5j2,Uid:82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:16:05.924425 kubelet[2519]: E0213 20:16:05.924333 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:05.925490 containerd[1475]: time="2025-02-13T20:16:05.925122500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-62bhh,Uid:58af9d69-676e-41db-a9d1-fb2841461113,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:05.936012 kubelet[2519]: E0213 20:16:05.935959 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:05.937125 containerd[1475]: time="2025-02-13T20:16:05.936677024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbch8,Uid:cb4743e4-e761-441d-ae53-4d9924d89649,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:05.969265 containerd[1475]: time="2025-02-13T20:16:05.969200647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7887f69768-ls7rf,Uid:648198c7-66dc-48d6-8b8d-fd320bc90666,Namespace:calico-system,Attempt:0,}" Feb 13 20:16:05.977584 containerd[1475]: time="2025-02-13T20:16:05.977064469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479cf5b7f-mz564,Uid:a042f4c6-cdbb-48a4-9920-8318d966e49f,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:16:06.342699 systemd[1]: Created slice kubepods-besteffort-podf105c06f_1a6f_4ec2_924d_9b57627c66c2.slice - libcontainer container kubepods-besteffort-podf105c06f_1a6f_4ec2_924d_9b57627c66c2.slice. 
Feb 13 20:16:06.349888 containerd[1475]: time="2025-02-13T20:16:06.349517249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-624nw,Uid:f105c06f-1a6f-4ec2-924d-9b57627c66c2,Namespace:calico-system,Attempt:0,}" Feb 13 20:16:06.460905 containerd[1475]: time="2025-02-13T20:16:06.460830148Z" level=error msg="Failed to destroy network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.476799 containerd[1475]: time="2025-02-13T20:16:06.476713247Z" level=error msg="encountered an error cleaning up failed sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.479509 containerd[1475]: time="2025-02-13T20:16:06.479449478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479cf5b7f-mz564,Uid:a042f4c6-cdbb-48a4-9920-8318d966e49f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.481931 containerd[1475]: time="2025-02-13T20:16:06.477311401Z" level=error msg="Failed to destroy network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.482410 containerd[1475]: time="2025-02-13T20:16:06.482330146Z" level=error msg="encountered an error cleaning up failed sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.482611 containerd[1475]: time="2025-02-13T20:16:06.482518203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbch8,Uid:cb4743e4-e761-441d-ae53-4d9924d89649,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.493948 containerd[1475]: time="2025-02-13T20:16:06.493782974Z" level=error msg="Failed to destroy network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.494709 containerd[1475]: time="2025-02-13T20:16:06.494294502Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.494709 containerd[1475]: time="2025-02-13T20:16:06.494373394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-62bhh,Uid:58af9d69-676e-41db-a9d1-fb2841461113,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.497767 containerd[1475]: time="2025-02-13T20:16:06.494648887Z" level=error msg="Failed to destroy network for sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.497767 containerd[1475]: time="2025-02-13T20:16:06.497548151Z" level=error msg="encountered an error cleaning up failed sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.497767 containerd[1475]: time="2025-02-13T20:16:06.497631476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7887f69768-ls7rf,Uid:648198c7-66dc-48d6-8b8d-fd320bc90666,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.506614 containerd[1475]: time="2025-02-13T20:16:06.506560541Z" level=error msg="Failed to destroy network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.508095 containerd[1475]: time="2025-02-13T20:16:06.508043759Z" level=error msg="encountered an error cleaning up failed sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.508335 containerd[1475]: time="2025-02-13T20:16:06.508297624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479cf5b7f-4q5j2,Uid:82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.522146 kubelet[2519]: E0213 20:16:06.520910 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.522146 kubelet[2519]: E0213 20:16:06.520947 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.522146 kubelet[2519]: E0213 20:16:06.521015 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8479cf5b7f-4q5j2" Feb 13 20:16:06.522146 kubelet[2519]: E0213 20:16:06.521042 2519 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8479cf5b7f-4q5j2" Feb 13 20:16:06.522706 kubelet[2519]: E0213 20:16:06.521058 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.522706 kubelet[2519]: E0213 20:16:06.521088 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fbch8" Feb 13 20:16:06.522706 kubelet[2519]: E0213 20:16:06.521093 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8479cf5b7f-4q5j2_calico-apiserver(82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8479cf5b7f-4q5j2_calico-apiserver(82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8479cf5b7f-4q5j2" podUID="82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2" Feb 13 20:16:06.522931 kubelet[2519]: E0213 20:16:06.521118 2519 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fbch8" Feb 13 20:16:06.522931 kubelet[2519]: E0213 20:16:06.521171 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fbch8_kube-system(cb4743e4-e761-441d-ae53-4d9924d89649)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fbch8_kube-system(cb4743e4-e761-441d-ae53-4d9924d89649)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fbch8" podUID="cb4743e4-e761-441d-ae53-4d9924d89649" Feb 13 20:16:06.522931 kubelet[2519]: E0213 20:16:06.520903 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.523186 kubelet[2519]: E0213 20:16:06.521235 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8479cf5b7f-mz564" Feb 13 20:16:06.523186 kubelet[2519]: E0213 20:16:06.521257 2519 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8479cf5b7f-mz564" Feb 13 20:16:06.523186 kubelet[2519]: E0213 20:16:06.521295 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8479cf5b7f-mz564_calico-apiserver(a042f4c6-cdbb-48a4-9920-8318d966e49f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8479cf5b7f-mz564_calico-apiserver(a042f4c6-cdbb-48a4-9920-8318d966e49f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8479cf5b7f-mz564" podUID="a042f4c6-cdbb-48a4-9920-8318d966e49f" Feb 13 20:16:06.523441 kubelet[2519]: E0213 20:16:06.521333 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.523441 kubelet[2519]: E0213 20:16:06.521360 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7887f69768-ls7rf" Feb 13 20:16:06.523441 kubelet[2519]: E0213 20:16:06.521383 2519 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7887f69768-ls7rf" Feb 13 20:16:06.524679 kubelet[2519]: E0213 20:16:06.521454 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7887f69768-ls7rf_calico-system(648198c7-66dc-48d6-8b8d-fd320bc90666)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7887f69768-ls7rf_calico-system(648198c7-66dc-48d6-8b8d-fd320bc90666)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7887f69768-ls7rf" podUID="648198c7-66dc-48d6-8b8d-fd320bc90666" Feb 13 20:16:06.524679 kubelet[2519]: E0213 20:16:06.521016 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-62bhh" Feb 13 20:16:06.524679 kubelet[2519]: E0213 20:16:06.521798 2519 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-62bhh" Feb 13 20:16:06.524970 kubelet[2519]: E0213 
20:16:06.521856 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-62bhh_kube-system(58af9d69-676e-41db-a9d1-fb2841461113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-62bhh_kube-system(58af9d69-676e-41db-a9d1-fb2841461113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-62bhh" podUID="58af9d69-676e-41db-a9d1-fb2841461113" Feb 13 20:16:06.529582 kubelet[2519]: I0213 20:16:06.529428 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:06.532463 kubelet[2519]: I0213 20:16:06.532407 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:06.541929 containerd[1475]: time="2025-02-13T20:16:06.540389453Z" level=info msg="StopPodSandbox for \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\"" Feb 13 20:16:06.542267 containerd[1475]: time="2025-02-13T20:16:06.542232722Z" level=info msg="StopPodSandbox for \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\"" Feb 13 20:16:06.543438 containerd[1475]: time="2025-02-13T20:16:06.543343112Z" level=info msg="Ensure that sandbox 843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a in task-service has been cleanup successfully" Feb 13 20:16:06.544420 containerd[1475]: time="2025-02-13T20:16:06.544247274Z" level=info msg="Ensure that sandbox 5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5 in task-service has been cleanup successfully" Feb 13 20:16:06.546990 kubelet[2519]: I0213 20:16:06.546959 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:06.547866 containerd[1475]: time="2025-02-13T20:16:06.547825995Z" level=info msg="StopPodSandbox for \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\"" Feb 13 20:16:06.548233 containerd[1475]: time="2025-02-13T20:16:06.548193779Z" level=info msg="Ensure that sandbox b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe in task-service has been cleanup successfully" Feb 13 20:16:06.561235 kubelet[2519]: I0213 20:16:06.561165 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:06.562165 containerd[1475]: time="2025-02-13T20:16:06.562014560Z" level=info msg="StopPodSandbox for \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\"" Feb 13 20:16:06.563982 containerd[1475]: time="2025-02-13T20:16:06.563887325Z" level=info msg="Ensure that sandbox f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610 in task-service has been cleanup successfully" Feb 13 20:16:06.581510 containerd[1475]: time="2025-02-13T20:16:06.581316179Z" level=error msg="Failed to destroy network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.582923 containerd[1475]: time="2025-02-13T20:16:06.582111083Z" level=error msg="encountered an error cleaning up failed sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.583763 containerd[1475]: time="2025-02-13T20:16:06.583107075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-624nw,Uid:f105c06f-1a6f-4ec2-924d-9b57627c66c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.583905 kubelet[2519]: E0213 20:16:06.583371 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.583905 kubelet[2519]: E0213 20:16:06.583445 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-624nw" Feb 13 20:16:06.583905 kubelet[2519]: E0213 20:16:06.583476 2519 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-624nw" Feb 13 20:16:06.585757 kubelet[2519]: E0213 20:16:06.584553 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-624nw_calico-system(f105c06f-1a6f-4ec2-924d-9b57627c66c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-624nw_calico-system(f105c06f-1a6f-4ec2-924d-9b57627c66c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-624nw" podUID="f105c06f-1a6f-4ec2-924d-9b57627c66c2" Feb 13 20:16:06.649754 containerd[1475]: time="2025-02-13T20:16:06.649690244Z" level=error msg="StopPodSandbox for \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\" failed" error="failed to destroy network for sandbox 
\"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.650360 kubelet[2519]: E0213 20:16:06.650137 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:06.650360 kubelet[2519]: E0213 20:16:06.650200 2519 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5"} Feb 13 20:16:06.650360 kubelet[2519]: E0213 20:16:06.650282 2519 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"648198c7-66dc-48d6-8b8d-fd320bc90666\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:16:06.650360 kubelet[2519]: E0213 20:16:06.650307 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"648198c7-66dc-48d6-8b8d-fd320bc90666\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7887f69768-ls7rf" podUID="648198c7-66dc-48d6-8b8d-fd320bc90666" Feb 13 20:16:06.656926 containerd[1475]: time="2025-02-13T20:16:06.656289052Z" level=error msg="StopPodSandbox for \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\" failed" error="failed to destroy network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.657245 kubelet[2519]: E0213 20:16:06.657190 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:06.657345 kubelet[2519]: E0213 20:16:06.657264 2519 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a"} Feb 13 20:16:06.657345 kubelet[2519]: E0213 20:16:06.657315 2519 
kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb4743e4-e761-441d-ae53-4d9924d89649\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:16:06.657449 kubelet[2519]: E0213 20:16:06.657360 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb4743e4-e761-441d-ae53-4d9924d89649\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fbch8" podUID="cb4743e4-e761-441d-ae53-4d9924d89649" Feb 13 20:16:06.669311 containerd[1475]: time="2025-02-13T20:16:06.668879549Z" level=error msg="StopPodSandbox for \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\" failed" error="failed to destroy network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.669558 kubelet[2519]: E0213 20:16:06.669132 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:06.669558 kubelet[2519]: E0213 20:16:06.669190 2519 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610"} Feb 13 20:16:06.669558 kubelet[2519]: E0213 20:16:06.669229 2519 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58af9d69-676e-41db-a9d1-fb2841461113\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:16:06.669558 kubelet[2519]: E0213 20:16:06.669253 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58af9d69-676e-41db-a9d1-fb2841461113\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-62bhh" 
podUID="58af9d69-676e-41db-a9d1-fb2841461113" Feb 13 20:16:06.670922 containerd[1475]: time="2025-02-13T20:16:06.670860526Z" level=error msg="StopPodSandbox for \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\" failed" error="failed to destroy network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:06.671300 kubelet[2519]: E0213 20:16:06.671153 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:06.671300 kubelet[2519]: E0213 20:16:06.671211 2519 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe"} Feb 13 20:16:06.671300 kubelet[2519]: E0213 20:16:06.671248 2519 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a042f4c6-cdbb-48a4-9920-8318d966e49f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:16:06.671300 kubelet[2519]: E0213 20:16:06.671272 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a042f4c6-cdbb-48a4-9920-8318d966e49f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8479cf5b7f-mz564" podUID="a042f4c6-cdbb-48a4-9920-8318d966e49f" Feb 13 20:16:06.814545 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610-shm.mount: Deactivated successfully. Feb 13 20:16:06.815071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175-shm.mount: Deactivated successfully. 
Feb 13 20:16:07.568458 kubelet[2519]: I0213 20:16:07.568086 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:07.570011 containerd[1475]: time="2025-02-13T20:16:07.569379783Z" level=info msg="StopPodSandbox for \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\"" Feb 13 20:16:07.570011 containerd[1475]: time="2025-02-13T20:16:07.569603588Z" level=info msg="Ensure that sandbox 679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d in task-service has been cleanup successfully" Feb 13 20:16:07.578624 kubelet[2519]: I0213 20:16:07.577674 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:07.579784 containerd[1475]: time="2025-02-13T20:16:07.579716585Z" level=info msg="StopPodSandbox for \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\"" Feb 13 20:16:07.580210 containerd[1475]: time="2025-02-13T20:16:07.580178159Z" level=info msg="Ensure that sandbox a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175 in task-service has been cleanup successfully" Feb 13 20:16:07.655339 containerd[1475]: time="2025-02-13T20:16:07.655260280Z" level=error msg="StopPodSandbox for \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\" failed" error="failed to destroy network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:07.655966 kubelet[2519]: E0213 20:16:07.655921 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:07.656174 kubelet[2519]: E0213 20:16:07.655975 2519 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d"} Feb 13 20:16:07.656174 kubelet[2519]: E0213 20:16:07.656016 2519 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f105c06f-1a6f-4ec2-924d-9b57627c66c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:16:07.656174 kubelet[2519]: E0213 20:16:07.656049 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f105c06f-1a6f-4ec2-924d-9b57627c66c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-624nw" podUID="f105c06f-1a6f-4ec2-924d-9b57627c66c2" Feb 13 20:16:07.658536 containerd[1475]: time="2025-02-13T20:16:07.658486950Z" level=error msg="StopPodSandbox for \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\" failed" error="failed to destroy network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:16:07.660782 kubelet[2519]: E0213 20:16:07.658732 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:07.660782 kubelet[2519]: E0213 20:16:07.658871 2519 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175"} Feb 13 20:16:07.660782 kubelet[2519]: E0213 20:16:07.658906 2519 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:16:07.660782 kubelet[2519]: E0213 20:16:07.658944 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8479cf5b7f-4q5j2" podUID="82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2" Feb 13 20:16:12.537797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758133821.mount: Deactivated successfully. 
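kubelet's handling of these errors is visible in the lines themselves: CreatePodSandbox fails, the pod worker records "Error syncing pod, skipping", and on a later sync it first attempts StopPodSandbox ("Ensure that sandbox ... has been cleanup successfully") before creating a fresh sandbox, so each pod UID keeps reappearing with a new sandbox ID until the CNI plugin becomes healthy. The sketch below is only a generic retry-with-backoff loop to illustrate that pattern; it is not kubelet's pod worker and every name in it is invented.

    // retry.go - hypothetical illustration of the sync/retry pattern above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNodenameMissing = errors.New("stat /var/lib/calico/nodename: no such file or directory")

    // syncPod stands in for one sync attempt: tear down any stale sandbox,
    // then create a new one. Both steps fail while the nodename file is absent.
    func syncPod(attempt int) error {
        if attempt < 2 { // pretend calico/node becomes ready on the third try
            return errNodenameMissing
        }
        return nil
    }

    func main() {
        backoff := time.Second
        for attempt := 0; ; attempt++ {
            if err := syncPod(attempt); err != nil {
                fmt.Printf("Error syncing pod, skipping: %v (retrying in %v)\n", err, backoff)
                time.Sleep(backoff)
                if backoff < 16*time.Second {
                    backoff *= 2 // illustrative exponential backoff
                }
                continue
            }
            fmt.Println("sandbox created successfully")
            return
        }
    }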
Feb 13 20:16:12.663171 containerd[1475]: time="2025-02-13T20:16:12.648876909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:12.677237 containerd[1475]: time="2025-02-13T20:16:12.656316513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:16:12.689089 containerd[1475]: time="2025-02-13T20:16:12.689037717Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:12.692987 containerd[1475]: time="2025-02-13T20:16:12.692908824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:12.699797 containerd[1475]: time="2025-02-13T20:16:12.699695824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.190589414s" Feb 13 20:16:12.700173 containerd[1475]: time="2025-02-13T20:16:12.700023348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:16:12.737245 containerd[1475]: time="2025-02-13T20:16:12.737195270Z" level=info msg="CreateContainer within sandbox \"25277a1f992b661a90226552eddef45a4e869358fa4c3a77be60e9e7224a5f2b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:16:12.795141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640906575.mount: Deactivated successfully. Feb 13 20:16:12.940555 containerd[1475]: time="2025-02-13T20:16:12.940455594Z" level=info msg="CreateContainer within sandbox \"25277a1f992b661a90226552eddef45a4e869358fa4c3a77be60e9e7224a5f2b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7bc85994d715af3ab973090023616c75a1f5f240b45be2c4f0b42e20a1743ab0\"" Feb 13 20:16:12.941349 containerd[1475]: time="2025-02-13T20:16:12.941314091Z" level=info msg="StartContainer for \"7bc85994d715af3ab973090023616c75a1f5f240b45be2c4f0b42e20a1743ab0\"" Feb 13 20:16:13.115019 systemd[1]: Started cri-containerd-7bc85994d715af3ab973090023616c75a1f5f240b45be2c4f0b42e20a1743ab0.scope - libcontainer container 7bc85994d715af3ab973090023616c75a1f5f240b45be2c4f0b42e20a1743ab0. Feb 13 20:16:13.216424 containerd[1475]: time="2025-02-13T20:16:13.216359968Z" level=info msg="StartContainer for \"7bc85994d715af3ab973090023616c75a1f5f240b45be2c4f0b42e20a1743ab0\" returns successfully" Feb 13 20:16:13.331265 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:16:13.333875 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
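For scale: the calico/node image is reported as 142,741,872 bytes and the pull took 7.190589414 s, which works out to roughly 142741872 / 7.19 ≈ 19.9 MB/s, about 160 Mbit/s, from ghcr.io. The WireGuard module load that follows is consistent with calico-node initialising its optional WireGuard encryption support, though the log itself does not say which component triggered the module load.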
Feb 13 20:16:13.724680 kubelet[2519]: E0213 20:16:13.724072 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:13.930801 kubelet[2519]: I0213 20:16:13.930707 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4kvb2" podStartSLOduration=2.118316554 podStartE2EDuration="19.930675287s" podCreationTimestamp="2025-02-13 20:15:54 +0000 UTC" firstStartedPulling="2025-02-13 20:15:54.895000737 +0000 UTC m=+13.817351384" lastFinishedPulling="2025-02-13 20:16:12.707359445 +0000 UTC m=+31.629710117" observedRunningTime="2025-02-13 20:16:13.899286213 +0000 UTC m=+32.821636883" watchObservedRunningTime="2025-02-13 20:16:13.930675287 +0000 UTC m=+32.853025956" Feb 13 20:16:14.696371 kubelet[2519]: E0213 20:16:14.696228 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:15.699382 kubelet[2519]: E0213 20:16:15.698829 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:17.334690 containerd[1475]: time="2025-02-13T20:16:17.333074834Z" level=info msg="StopPodSandbox for \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\"" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.452 [INFO][3835] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.453 [INFO][3835] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" iface="eth0" netns="/var/run/netns/cni-ab2b7e4f-0f4d-1a3f-02dd-d20e8d943c23" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.453 [INFO][3835] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" iface="eth0" netns="/var/run/netns/cni-ab2b7e4f-0f4d-1a3f-02dd-d20e8d943c23" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.457 [INFO][3835] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" iface="eth0" netns="/var/run/netns/cni-ab2b7e4f-0f4d-1a3f-02dd-d20e8d943c23" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.457 [INFO][3835] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.457 [INFO][3835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.758 [INFO][3842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" HandleID="k8s-pod-network.5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.761 [INFO][3842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.762 [INFO][3842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.777 [WARNING][3842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" HandleID="k8s-pod-network.5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.777 [INFO][3842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" HandleID="k8s-pod-network.5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.780 [INFO][3842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:17.785336 containerd[1475]: 2025-02-13 20:16:17.782 [INFO][3835] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:17.789035 containerd[1475]: time="2025-02-13T20:16:17.787888392Z" level=info msg="TearDown network for sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\" successfully" Feb 13 20:16:17.789035 containerd[1475]: time="2025-02-13T20:16:17.787932669Z" level=info msg="StopPodSandbox for \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\" returns successfully" Feb 13 20:16:17.791658 systemd[1]: run-netns-cni\x2dab2b7e4f\x2d0f4d\x2d1a3f\x2d02dd\x2dd20e8d943c23.mount: Deactivated successfully. 
Feb 13 20:16:17.794109 containerd[1475]: time="2025-02-13T20:16:17.794060289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7887f69768-ls7rf,Uid:648198c7-66dc-48d6-8b8d-fd320bc90666,Namespace:calico-system,Attempt:1,}" Feb 13 20:16:18.064420 systemd-networkd[1378]: cali7046ea251e5: Link UP Feb 13 20:16:18.066304 systemd-networkd[1378]: cali7046ea251e5: Gained carrier Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:17.885 [INFO][3869] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:17.904 [INFO][3869] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0 calico-kube-controllers-7887f69768- calico-system 648198c7-66dc-48d6-8b8d-fd320bc90666 757 0 2025-02-13 20:15:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7887f69768 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-e-9d3732dae3 calico-kube-controllers-7887f69768-ls7rf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7046ea251e5 [] []}} ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Namespace="calico-system" Pod="calico-kube-controllers-7887f69768-ls7rf" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:17.905 [INFO][3869] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Namespace="calico-system" Pod="calico-kube-controllers-7887f69768-ls7rf" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:17.971 [INFO][3880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" HandleID="k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:17.987 [INFO][3880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" HandleID="k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384480), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-e-9d3732dae3", "pod":"calico-kube-controllers-7887f69768-ls7rf", "timestamp":"2025-02-13 20:16:17.971534978 +0000 UTC"}, Hostname:"ci-4081.3.1-e-9d3732dae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:17.988 [INFO][3880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:17.988 [INFO][3880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:17.988 [INFO][3880] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-e-9d3732dae3' Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:17.994 [INFO][3880] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.008 [INFO][3880] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.015 [INFO][3880] ipam/ipam.go 489: Trying affinity for 192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.018 [INFO][3880] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.021 [INFO][3880] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.021 [INFO][3880] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.024 [INFO][3880] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96 Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.033 [INFO][3880] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.043 [INFO][3880] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.1/26] block=192.168.45.0/26 handle="k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.043 [INFO][3880] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.1/26] handle="k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.043 [INFO][3880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
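The address claimed here comes from the block 192.168.45.0/26, for which this host (ci-4081.3.1-e-9d3732dae3) holds the affinity; a /26 covers 2^(32-26) = 64 addresses, and 192.168.45.1 is the first one claimed from it, so later workloads on this node would normally draw from the same block before a new block is claimed.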
Feb 13 20:16:18.092179 containerd[1475]: 2025-02-13 20:16:18.043 [INFO][3880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.1/26] IPv6=[] ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" HandleID="k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:18.103547 containerd[1475]: 2025-02-13 20:16:18.048 [INFO][3869] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Namespace="calico-system" Pod="calico-kube-controllers-7887f69768-ls7rf" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0", GenerateName:"calico-kube-controllers-7887f69768-", Namespace:"calico-system", SelfLink:"", UID:"648198c7-66dc-48d6-8b8d-fd320bc90666", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7887f69768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"", Pod:"calico-kube-controllers-7887f69768-ls7rf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7046ea251e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:18.103547 containerd[1475]: 2025-02-13 20:16:18.048 [INFO][3869] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.1/32] ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Namespace="calico-system" Pod="calico-kube-controllers-7887f69768-ls7rf" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:18.103547 containerd[1475]: 2025-02-13 20:16:18.048 [INFO][3869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7046ea251e5 ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Namespace="calico-system" Pod="calico-kube-controllers-7887f69768-ls7rf" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:18.103547 containerd[1475]: 2025-02-13 20:16:18.066 [INFO][3869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Namespace="calico-system" Pod="calico-kube-controllers-7887f69768-ls7rf" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:18.103547 
containerd[1475]: 2025-02-13 20:16:18.066 [INFO][3869] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Namespace="calico-system" Pod="calico-kube-controllers-7887f69768-ls7rf" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0", GenerateName:"calico-kube-controllers-7887f69768-", Namespace:"calico-system", SelfLink:"", UID:"648198c7-66dc-48d6-8b8d-fd320bc90666", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7887f69768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96", Pod:"calico-kube-controllers-7887f69768-ls7rf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7046ea251e5", MAC:"1e:04:4b:f1:74:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:18.103547 containerd[1475]: 2025-02-13 20:16:18.087 [INFO][3869] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96" Namespace="calico-system" Pod="calico-kube-controllers-7887f69768-ls7rf" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:18.142951 containerd[1475]: time="2025-02-13T20:16:18.142803487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:18.142951 containerd[1475]: time="2025-02-13T20:16:18.142872244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:18.142951 containerd[1475]: time="2025-02-13T20:16:18.142884719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:18.143378 containerd[1475]: time="2025-02-13T20:16:18.143302497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:18.185054 systemd[1]: Started cri-containerd-7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96.scope - libcontainer container 7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96. 
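Taken together, the IPAM lines above describe a simple sequence: take the host-wide IPAM lock, look up this host's block affinities, try the affine block 192.168.45.0/26, assign one address, record a handle, write the block back, release the lock, then wire up the veth (cali7046ea251e5) and persist the WorkloadEndpoint. The Go sketch below compresses that sequence for orientation only; it is not Calico's implementation and it omits the datastore and affinity lookup entirely.

    // ipamflow.go - simplified, assumption-laden sketch of the ADD-side IPAM
    // flow narrated in the log: lock, use the affine block, assign, unlock.
    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type block struct {
        cidr netip.Prefix
        next int // index of the next free address (toy allocator)
    }

    var (
        ipamLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log
        affine   = block{cidr: netip.MustParsePrefix("192.168.45.0/26"), next: 1}
    )

    // assignOne hands out the next free address from the host's affine block.
    func assignOne(handleID string) netip.Addr {
        ipamLock.Lock()
        defer ipamLock.Unlock()
        addr := affine.cidr.Addr()
        for i := 0; i < affine.next; i++ {
            addr = addr.Next()
        }
        affine.next++
        fmt.Printf("claimed %s/32 for handle %s from block %s\n", addr, handleID, affine.cidr)
        return addr
    }

    func main() {
        assignOne("k8s-pod-network.7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96")
    }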
Feb 13 20:16:18.254402 containerd[1475]: time="2025-02-13T20:16:18.254220731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7887f69768-ls7rf,Uid:648198c7-66dc-48d6-8b8d-fd320bc90666,Namespace:calico-system,Attempt:1,} returns sandbox id \"7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96\"" Feb 13 20:16:18.289797 containerd[1475]: time="2025-02-13T20:16:18.289727669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:16:19.333006 containerd[1475]: time="2025-02-13T20:16:19.332124455Z" level=info msg="StopPodSandbox for \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\"" Feb 13 20:16:19.333006 containerd[1475]: time="2025-02-13T20:16:19.332784163Z" level=info msg="StopPodSandbox for \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\"" Feb 13 20:16:19.338656 containerd[1475]: time="2025-02-13T20:16:19.337925541Z" level=info msg="StopPodSandbox for \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\"" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.470 [INFO][3999] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.473 [INFO][3999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" iface="eth0" netns="/var/run/netns/cni-767fddef-c367-4102-2134-af85bb78f755" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.474 [INFO][3999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" iface="eth0" netns="/var/run/netns/cni-767fddef-c367-4102-2134-af85bb78f755" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.474 [INFO][3999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" iface="eth0" netns="/var/run/netns/cni-767fddef-c367-4102-2134-af85bb78f755" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.475 [INFO][3999] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.475 [INFO][3999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.578 [INFO][4024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" HandleID="k8s-pod-network.f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.579 [INFO][4024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.579 [INFO][4024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.591 [WARNING][4024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" HandleID="k8s-pod-network.f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.591 [INFO][4024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" HandleID="k8s-pod-network.f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.596 [INFO][4024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:19.608223 containerd[1475]: 2025-02-13 20:16:19.600 [INFO][3999] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:19.610505 containerd[1475]: time="2025-02-13T20:16:19.608558174Z" level=info msg="TearDown network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\" successfully" Feb 13 20:16:19.617928 containerd[1475]: time="2025-02-13T20:16:19.608722319Z" level=info msg="StopPodSandbox for \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\" returns successfully" Feb 13 20:16:19.618890 kubelet[2519]: E0213 20:16:19.618691 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:19.620115 systemd[1]: run-netns-cni\x2d767fddef\x2dc367\x2d4102\x2d2134\x2daf85bb78f755.mount: Deactivated successfully. Feb 13 20:16:19.624892 containerd[1475]: time="2025-02-13T20:16:19.623508647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-62bhh,Uid:58af9d69-676e-41db-a9d1-fb2841461113,Namespace:kube-system,Attempt:1,}" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.478 [INFO][4007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.478 [INFO][4007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" iface="eth0" netns="/var/run/netns/cni-641e4bd7-21e4-1502-ef75-69f7df89b969" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.480 [INFO][4007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" iface="eth0" netns="/var/run/netns/cni-641e4bd7-21e4-1502-ef75-69f7df89b969" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.480 [INFO][4007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" iface="eth0" netns="/var/run/netns/cni-641e4bd7-21e4-1502-ef75-69f7df89b969" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.481 [INFO][4007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.482 [INFO][4007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.586 [INFO][4026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" HandleID="k8s-pod-network.a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.587 [INFO][4026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.597 [INFO][4026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.625 [WARNING][4026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" HandleID="k8s-pod-network.a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.625 [INFO][4026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" HandleID="k8s-pod-network.a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.630 [INFO][4026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:19.665611 containerd[1475]: 2025-02-13 20:16:19.649 [INFO][4007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:19.671671 containerd[1475]: time="2025-02-13T20:16:19.666950748Z" level=info msg="TearDown network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\" successfully" Feb 13 20:16:19.671671 containerd[1475]: time="2025-02-13T20:16:19.666998706Z" level=info msg="StopPodSandbox for \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\" returns successfully" Feb 13 20:16:19.671671 containerd[1475]: time="2025-02-13T20:16:19.671145974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479cf5b7f-4q5j2,Uid:82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.547 [INFO][4010] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.547 [INFO][4010] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" iface="eth0" netns="/var/run/netns/cni-77dff875-54ee-909f-8e0d-0d75b543588f" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.552 [INFO][4010] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" iface="eth0" netns="/var/run/netns/cni-77dff875-54ee-909f-8e0d-0d75b543588f" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.552 [INFO][4010] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" iface="eth0" netns="/var/run/netns/cni-77dff875-54ee-909f-8e0d-0d75b543588f" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.553 [INFO][4010] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.553 [INFO][4010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.648 [INFO][4035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" HandleID="k8s-pod-network.843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.648 [INFO][4035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.648 [INFO][4035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.662 [WARNING][4035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" HandleID="k8s-pod-network.843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.662 [INFO][4035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" HandleID="k8s-pod-network.843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.664 [INFO][4035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:19.677791 containerd[1475]: 2025-02-13 20:16:19.672 [INFO][4010] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:19.677273 systemd[1]: run-netns-cni\x2d641e4bd7\x2d21e4\x2d1502\x2def75\x2d69f7df89b969.mount: Deactivated successfully. 
Feb 13 20:16:19.688758 containerd[1475]: time="2025-02-13T20:16:19.688584341Z" level=info msg="TearDown network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\" successfully" Feb 13 20:16:19.688758 containerd[1475]: time="2025-02-13T20:16:19.688662802Z" level=info msg="StopPodSandbox for \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\" returns successfully" Feb 13 20:16:19.689654 systemd[1]: run-netns-cni\x2d77dff875\x2d54ee\x2d909f\x2d8e0d\x2d0d75b543588f.mount: Deactivated successfully. Feb 13 20:16:19.697778 kubelet[2519]: E0213 20:16:19.696896 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:19.700457 containerd[1475]: time="2025-02-13T20:16:19.700415139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbch8,Uid:cb4743e4-e761-441d-ae53-4d9924d89649,Namespace:kube-system,Attempt:1,}" Feb 13 20:16:19.926977 systemd-networkd[1378]: cali7046ea251e5: Gained IPv6LL Feb 13 20:16:20.291451 systemd-networkd[1378]: cali341109248de: Link UP Feb 13 20:16:20.296190 systemd-networkd[1378]: cali341109248de: Gained carrier Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:19.974 [INFO][4056] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.025 [INFO][4056] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0 coredns-6f6b679f8f- kube-system 58af9d69-676e-41db-a9d1-fb2841461113 769 0 2025-02-13 20:15:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-e-9d3732dae3 coredns-6f6b679f8f-62bhh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali341109248de [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Namespace="kube-system" Pod="coredns-6f6b679f8f-62bhh" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.026 [INFO][4056] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Namespace="kube-system" Pod="coredns-6f6b679f8f-62bhh" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.150 [INFO][4106] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" HandleID="k8s-pod-network.063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.172 [INFO][4106] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" HandleID="k8s-pod-network.063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a420), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-e-9d3732dae3", "pod":"coredns-6f6b679f8f-62bhh", "timestamp":"2025-02-13 20:16:20.150527179 +0000 UTC"}, Hostname:"ci-4081.3.1-e-9d3732dae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.172 [INFO][4106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.172 [INFO][4106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.172 [INFO][4106] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-e-9d3732dae3' Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.177 [INFO][4106] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.193 [INFO][4106] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.211 [INFO][4106] ipam/ipam.go 489: Trying affinity for 192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.220 [INFO][4106] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.230 [INFO][4106] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.230 [INFO][4106] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.234 [INFO][4106] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7 Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.247 [INFO][4106] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.263 [INFO][4106] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.2/26] block=192.168.45.0/26 handle="k8s-pod-network.063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.263 [INFO][4106] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.2/26] handle="k8s-pod-network.063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.263 [INFO][4106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:16:20.343305 containerd[1475]: 2025-02-13 20:16:20.264 [INFO][4106] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.2/26] IPv6=[] ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" HandleID="k8s-pod-network.063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:20.345403 containerd[1475]: 2025-02-13 20:16:20.270 [INFO][4056] cni-plugin/k8s.go 386: Populated endpoint ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Namespace="kube-system" Pod="coredns-6f6b679f8f-62bhh" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"58af9d69-676e-41db-a9d1-fb2841461113", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"", Pod:"coredns-6f6b679f8f-62bhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali341109248de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:20.345403 containerd[1475]: 2025-02-13 20:16:20.277 [INFO][4056] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.2/32] ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Namespace="kube-system" Pod="coredns-6f6b679f8f-62bhh" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:20.345403 containerd[1475]: 2025-02-13 20:16:20.278 [INFO][4056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali341109248de ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Namespace="kube-system" Pod="coredns-6f6b679f8f-62bhh" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:20.345403 containerd[1475]: 2025-02-13 20:16:20.295 [INFO][4056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Namespace="kube-system" Pod="coredns-6f6b679f8f-62bhh" 
WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:20.345403 containerd[1475]: 2025-02-13 20:16:20.299 [INFO][4056] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Namespace="kube-system" Pod="coredns-6f6b679f8f-62bhh" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"58af9d69-676e-41db-a9d1-fb2841461113", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7", Pod:"coredns-6f6b679f8f-62bhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali341109248de", MAC:"b6:79:7b:78:19:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:20.345403 containerd[1475]: 2025-02-13 20:16:20.336 [INFO][4056] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7" Namespace="kube-system" Pod="coredns-6f6b679f8f-62bhh" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:20.433149 systemd-networkd[1378]: calic96250e6c82: Link UP Feb 13 20:16:20.433538 systemd-networkd[1378]: calic96250e6c82: Gained carrier Feb 13 20:16:20.465173 containerd[1475]: time="2025-02-13T20:16:20.462029183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:20.465173 containerd[1475]: time="2025-02-13T20:16:20.462110167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:20.465173 containerd[1475]: time="2025-02-13T20:16:20.462135333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:20.465173 containerd[1475]: time="2025-02-13T20:16:20.462283115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:19.919 [INFO][4065] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:19.994 [INFO][4065] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0 calico-apiserver-8479cf5b7f- calico-apiserver 82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2 770 0 2025-02-13 20:15:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8479cf5b7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-e-9d3732dae3 calico-apiserver-8479cf5b7f-4q5j2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic96250e6c82 [] []}} ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-4q5j2" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:19.997 [INFO][4065] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-4q5j2" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.173 [INFO][4099] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" HandleID="k8s-pod-network.e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.205 [INFO][4099] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" HandleID="k8s-pod-network.e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001eb8e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-e-9d3732dae3", "pod":"calico-apiserver-8479cf5b7f-4q5j2", "timestamp":"2025-02-13 20:16:20.173009857 +0000 UTC"}, Hostname:"ci-4081.3.1-e-9d3732dae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.206 [INFO][4099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.263 [INFO][4099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.264 [INFO][4099] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-e-9d3732dae3' Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.281 [INFO][4099] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.292 [INFO][4099] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.336 [INFO][4099] ipam/ipam.go 489: Trying affinity for 192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.342 [INFO][4099] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.355 [INFO][4099] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.355 [INFO][4099] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.361 [INFO][4099] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.373 [INFO][4099] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.403 [INFO][4099] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.3/26] block=192.168.45.0/26 handle="k8s-pod-network.e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.403 [INFO][4099] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.3/26] handle="k8s-pod-network.e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.403 [INFO][4099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:16:20.509361 containerd[1475]: 2025-02-13 20:16:20.403 [INFO][4099] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.3/26] IPv6=[] ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" HandleID="k8s-pod-network.e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:20.510871 containerd[1475]: 2025-02-13 20:16:20.417 [INFO][4065] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-4q5j2" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0", GenerateName:"calico-apiserver-8479cf5b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479cf5b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"", Pod:"calico-apiserver-8479cf5b7f-4q5j2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic96250e6c82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:20.510871 containerd[1475]: 2025-02-13 20:16:20.417 [INFO][4065] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.3/32] ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-4q5j2" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:20.510871 containerd[1475]: 2025-02-13 20:16:20.417 [INFO][4065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic96250e6c82 ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-4q5j2" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:20.510871 containerd[1475]: 2025-02-13 20:16:20.433 [INFO][4065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-4q5j2" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:20.510871 containerd[1475]: 2025-02-13 20:16:20.449 [INFO][4065] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-4q5j2" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0", GenerateName:"calico-apiserver-8479cf5b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479cf5b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be", Pod:"calico-apiserver-8479cf5b7f-4q5j2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic96250e6c82", MAC:"92:c4:fb:bc:bb:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:20.510871 containerd[1475]: 2025-02-13 20:16:20.482 [INFO][4065] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-4q5j2" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:20.513523 systemd[1]: Started cri-containerd-063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7.scope - libcontainer container 063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7. 
Feb 13 20:16:20.560064 systemd-networkd[1378]: calidceff31beac: Link UP Feb 13 20:16:20.563602 systemd-networkd[1378]: calidceff31beac: Gained carrier Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.024 [INFO][4076] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.098 [INFO][4076] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0 coredns-6f6b679f8f- kube-system cb4743e4-e761-441d-ae53-4d9924d89649 771 0 2025-02-13 20:15:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-e-9d3732dae3 coredns-6f6b679f8f-fbch8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidceff31beac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbch8" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.099 [INFO][4076] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbch8" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.219 [INFO][4112] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" HandleID="k8s-pod-network.15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.277 [INFO][4112] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" HandleID="k8s-pod-network.15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041cf50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-e-9d3732dae3", "pod":"coredns-6f6b679f8f-fbch8", "timestamp":"2025-02-13 20:16:20.219288147 +0000 UTC"}, Hostname:"ci-4081.3.1-e-9d3732dae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.277 [INFO][4112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.404 [INFO][4112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.404 [INFO][4112] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-e-9d3732dae3' Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.410 [INFO][4112] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.438 [INFO][4112] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.457 [INFO][4112] ipam/ipam.go 489: Trying affinity for 192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.463 [INFO][4112] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.479 [INFO][4112] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.480 [INFO][4112] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.485 [INFO][4112] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.503 [INFO][4112] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.526 [INFO][4112] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.4/26] block=192.168.45.0/26 handle="k8s-pod-network.15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.526 [INFO][4112] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.4/26] handle="k8s-pod-network.15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.527 [INFO][4112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:16:20.650792 containerd[1475]: 2025-02-13 20:16:20.527 [INFO][4112] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.4/26] IPv6=[] ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" HandleID="k8s-pod-network.15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:20.653283 containerd[1475]: 2025-02-13 20:16:20.536 [INFO][4076] cni-plugin/k8s.go 386: Populated endpoint ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbch8" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cb4743e4-e761-441d-ae53-4d9924d89649", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"", Pod:"coredns-6f6b679f8f-fbch8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidceff31beac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:20.653283 containerd[1475]: 2025-02-13 20:16:20.536 [INFO][4076] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.4/32] ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbch8" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:20.653283 containerd[1475]: 2025-02-13 20:16:20.536 [INFO][4076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidceff31beac ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbch8" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:20.653283 containerd[1475]: 2025-02-13 20:16:20.576 [INFO][4076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbch8" 
WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:20.653283 containerd[1475]: 2025-02-13 20:16:20.581 [INFO][4076] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbch8" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cb4743e4-e761-441d-ae53-4d9924d89649", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f", Pod:"coredns-6f6b679f8f-fbch8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidceff31beac", MAC:"26:e2:a1:0e:42:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:20.653283 containerd[1475]: 2025-02-13 20:16:20.632 [INFO][4076] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f" Namespace="kube-system" Pod="coredns-6f6b679f8f-fbch8" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:20.657907 containerd[1475]: time="2025-02-13T20:16:20.657589945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:20.657907 containerd[1475]: time="2025-02-13T20:16:20.657674617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:20.657907 containerd[1475]: time="2025-02-13T20:16:20.657702197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:20.662439 containerd[1475]: time="2025-02-13T20:16:20.660193924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:20.710785 containerd[1475]: time="2025-02-13T20:16:20.710701006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-62bhh,Uid:58af9d69-676e-41db-a9d1-fb2841461113,Namespace:kube-system,Attempt:1,} returns sandbox id \"063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7\"" Feb 13 20:16:20.712715 kubelet[2519]: E0213 20:16:20.712588 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:20.738695 containerd[1475]: time="2025-02-13T20:16:20.738650853Z" level=info msg="CreateContainer within sandbox \"063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:16:20.741818 containerd[1475]: time="2025-02-13T20:16:20.740480022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:20.741818 containerd[1475]: time="2025-02-13T20:16:20.740557247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:20.741818 containerd[1475]: time="2025-02-13T20:16:20.740586287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:20.741818 containerd[1475]: time="2025-02-13T20:16:20.741009546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:20.799186 systemd[1]: Started cri-containerd-15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f.scope - libcontainer container 15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f. Feb 13 20:16:20.802580 systemd[1]: Started cri-containerd-e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be.scope - libcontainer container e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be. Feb 13 20:16:20.830167 containerd[1475]: time="2025-02-13T20:16:20.830014090Z" level=info msg="CreateContainer within sandbox \"063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2c3ab9340de6d801042a48265e1a5feb44af0a479c0254ef38c3707022ae3a7\"" Feb 13 20:16:20.834038 containerd[1475]: time="2025-02-13T20:16:20.833991655Z" level=info msg="StartContainer for \"c2c3ab9340de6d801042a48265e1a5feb44af0a479c0254ef38c3707022ae3a7\"" Feb 13 20:16:20.940197 containerd[1475]: time="2025-02-13T20:16:20.940129263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbch8,Uid:cb4743e4-e761-441d-ae53-4d9924d89649,Namespace:kube-system,Attempt:1,} returns sandbox id \"15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f\"" Feb 13 20:16:20.946104 kubelet[2519]: E0213 20:16:20.946063 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:20.949558 systemd[1]: Started cri-containerd-c2c3ab9340de6d801042a48265e1a5feb44af0a479c0254ef38c3707022ae3a7.scope - libcontainer container c2c3ab9340de6d801042a48265e1a5feb44af0a479c0254ef38c3707022ae3a7. 
Feb 13 20:16:20.959085 containerd[1475]: time="2025-02-13T20:16:20.959010636Z" level=info msg="CreateContainer within sandbox \"15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:16:21.058001 containerd[1475]: time="2025-02-13T20:16:21.057299527Z" level=info msg="CreateContainer within sandbox \"15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ee2bc47bbb2362ec19847958baa19ba15d0b911c31cc281edddb03923201faf\"" Feb 13 20:16:21.062980 containerd[1475]: time="2025-02-13T20:16:21.062812685Z" level=info msg="StartContainer for \"4ee2bc47bbb2362ec19847958baa19ba15d0b911c31cc281edddb03923201faf\"" Feb 13 20:16:21.083479 containerd[1475]: time="2025-02-13T20:16:21.083257166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479cf5b7f-4q5j2,Uid:82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be\"" Feb 13 20:16:21.093687 containerd[1475]: time="2025-02-13T20:16:21.093482113Z" level=info msg="StartContainer for \"c2c3ab9340de6d801042a48265e1a5feb44af0a479c0254ef38c3707022ae3a7\" returns successfully" Feb 13 20:16:21.173134 systemd[1]: Started cri-containerd-4ee2bc47bbb2362ec19847958baa19ba15d0b911c31cc281edddb03923201faf.scope - libcontainer container 4ee2bc47bbb2362ec19847958baa19ba15d0b911c31cc281edddb03923201faf. Feb 13 20:16:21.304731 containerd[1475]: time="2025-02-13T20:16:21.304236439Z" level=info msg="StartContainer for \"4ee2bc47bbb2362ec19847958baa19ba15d0b911c31cc281edddb03923201faf\" returns successfully" Feb 13 20:16:21.768826 kubelet[2519]: E0213 20:16:21.767352 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:21.782348 kubelet[2519]: E0213 20:16:21.781906 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:22.091003 kubelet[2519]: I0213 20:16:22.090598 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fbch8" podStartSLOduration=36.090554606 podStartE2EDuration="36.090554606s" podCreationTimestamp="2025-02-13 20:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:16:22.012961255 +0000 UTC m=+40.935311925" watchObservedRunningTime="2025-02-13 20:16:22.090554606 +0000 UTC m=+41.012905277" Feb 13 20:16:22.104424 systemd-networkd[1378]: calidceff31beac: Gained IPv6LL Feb 13 20:16:22.294970 systemd-networkd[1378]: cali341109248de: Gained IPv6LL Feb 13 20:16:22.332339 containerd[1475]: time="2025-02-13T20:16:22.331120600Z" level=info msg="StopPodSandbox for \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\"" Feb 13 20:16:22.333565 containerd[1475]: time="2025-02-13T20:16:22.333425136Z" level=info msg="StopPodSandbox for \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\"" Feb 13 20:16:22.423952 systemd-networkd[1378]: calic96250e6c82: Gained IPv6LL Feb 13 20:16:22.523514 kubelet[2519]: I0213 20:16:22.523293 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-6f6b679f8f-62bhh" podStartSLOduration=36.523213853 podStartE2EDuration="36.523213853s" podCreationTimestamp="2025-02-13 20:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:16:22.092291879 +0000 UTC m=+41.014642549" watchObservedRunningTime="2025-02-13 20:16:22.523213853 +0000 UTC m=+41.445564762" Feb 13 20:16:22.538733 containerd[1475]: time="2025-02-13T20:16:22.538044393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:22.543613 containerd[1475]: time="2025-02-13T20:16:22.543426621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:16:22.548202 containerd[1475]: time="2025-02-13T20:16:22.548047648Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:22.563574 containerd[1475]: time="2025-02-13T20:16:22.563507837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:22.567012 containerd[1475]: time="2025-02-13T20:16:22.566810327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.276726142s" Feb 13 20:16:22.567012 containerd[1475]: time="2025-02-13T20:16:22.566872568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:16:22.574892 containerd[1475]: time="2025-02-13T20:16:22.572366236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:16:22.684766 containerd[1475]: time="2025-02-13T20:16:22.684610308Z" level=info msg="CreateContainer within sandbox \"7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:16:22.734574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494187315.mount: Deactivated successfully. Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.521 [INFO][4415] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.522 [INFO][4415] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" iface="eth0" netns="/var/run/netns/cni-adbe38ed-188a-1242-fa3f-0b7f8eafe2fd" Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.523 [INFO][4415] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" iface="eth0" netns="/var/run/netns/cni-adbe38ed-188a-1242-fa3f-0b7f8eafe2fd" Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.525 [INFO][4415] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" iface="eth0" netns="/var/run/netns/cni-adbe38ed-188a-1242-fa3f-0b7f8eafe2fd" Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.525 [INFO][4415] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.525 [INFO][4415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.688 [INFO][4433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" HandleID="k8s-pod-network.b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.688 [INFO][4433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.688 [INFO][4433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.722 [WARNING][4433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" HandleID="k8s-pod-network.b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.722 [INFO][4433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" HandleID="k8s-pod-network.b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.736 [INFO][4433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:22.749549 containerd[1475]: 2025-02-13 20:16:22.743 [INFO][4415] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:22.754775 containerd[1475]: time="2025-02-13T20:16:22.752829659Z" level=info msg="TearDown network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\" successfully" Feb 13 20:16:22.754775 containerd[1475]: time="2025-02-13T20:16:22.752873801Z" level=info msg="StopPodSandbox for \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\" returns successfully" Feb 13 20:16:22.758378 containerd[1475]: time="2025-02-13T20:16:22.758061894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479cf5b7f-mz564,Uid:a042f4c6-cdbb-48a4-9920-8318d966e49f,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:16:22.759274 systemd[1]: run-netns-cni\x2dadbe38ed\x2d188a\x2d1242\x2dfa3f\x2d0b7f8eafe2fd.mount: Deactivated successfully. 
Feb 13 20:16:22.764785 containerd[1475]: time="2025-02-13T20:16:22.762118704Z" level=info msg="CreateContainer within sandbox \"7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e152fe345c38f5dbb1298758e54ff647f822ece5a269613aa5a3831e97469ec0\"" Feb 13 20:16:22.769209 containerd[1475]: time="2025-02-13T20:16:22.769162226Z" level=info msg="StartContainer for \"e152fe345c38f5dbb1298758e54ff647f822ece5a269613aa5a3831e97469ec0\"" Feb 13 20:16:22.815000 kubelet[2519]: E0213 20:16:22.810902 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:22.824553 kubelet[2519]: E0213 20:16:22.816557 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.596 [INFO][4416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.596 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" iface="eth0" netns="/var/run/netns/cni-cf7a3083-ee28-128b-a1af-f771999dcfd6" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.598 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" iface="eth0" netns="/var/run/netns/cni-cf7a3083-ee28-128b-a1af-f771999dcfd6" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.603 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" iface="eth0" netns="/var/run/netns/cni-cf7a3083-ee28-128b-a1af-f771999dcfd6" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.603 [INFO][4416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.603 [INFO][4416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.833 [INFO][4440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" HandleID="k8s-pod-network.679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.833 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.834 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.884 [WARNING][4440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" HandleID="k8s-pod-network.679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.884 [INFO][4440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" HandleID="k8s-pod-network.679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.890 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:22.911039 containerd[1475]: 2025-02-13 20:16:22.904 [INFO][4416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:22.911039 containerd[1475]: time="2025-02-13T20:16:22.910196817Z" level=info msg="TearDown network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\" successfully" Feb 13 20:16:22.911039 containerd[1475]: time="2025-02-13T20:16:22.910345401Z" level=info msg="StopPodSandbox for \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\" returns successfully" Feb 13 20:16:22.913017 containerd[1475]: time="2025-02-13T20:16:22.912246035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-624nw,Uid:f105c06f-1a6f-4ec2-924d-9b57627c66c2,Namespace:calico-system,Attempt:1,}" Feb 13 20:16:23.094099 systemd[1]: Started cri-containerd-e152fe345c38f5dbb1298758e54ff647f822ece5a269613aa5a3831e97469ec0.scope - libcontainer container e152fe345c38f5dbb1298758e54ff647f822ece5a269613aa5a3831e97469ec0. 
Feb 13 20:16:23.264177 kubelet[2519]: I0213 20:16:23.262588 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:23.264177 kubelet[2519]: E0213 20:16:23.263277 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:23.419282 containerd[1475]: time="2025-02-13T20:16:23.419214998Z" level=info msg="StartContainer for \"e152fe345c38f5dbb1298758e54ff647f822ece5a269613aa5a3831e97469ec0\" returns successfully" Feb 13 20:16:23.467057 systemd-networkd[1378]: cali9d7b2dae214: Link UP Feb 13 20:16:23.471628 systemd-networkd[1378]: cali9d7b2dae214: Gained carrier Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.028 [INFO][4460] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.096 [INFO][4460] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0 calico-apiserver-8479cf5b7f- calico-apiserver a042f4c6-cdbb-48a4-9920-8318d966e49f 810 0 2025-02-13 20:15:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8479cf5b7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-e-9d3732dae3 calico-apiserver-8479cf5b7f-mz564 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d7b2dae214 [] []}} ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-mz564" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.096 [INFO][4460] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-mz564" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.230 [INFO][4503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" HandleID="k8s-pod-network.33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.297 [INFO][4503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" HandleID="k8s-pod-network.33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c3a90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-e-9d3732dae3", "pod":"calico-apiserver-8479cf5b7f-mz564", "timestamp":"2025-02-13 20:16:23.22972273 +0000 UTC"}, Hostname:"ci-4081.3.1-e-9d3732dae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.297 [INFO][4503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.298 [INFO][4503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.298 [INFO][4503] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-e-9d3732dae3' Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.303 [INFO][4503] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.313 [INFO][4503] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.327 [INFO][4503] ipam/ipam.go 489: Trying affinity for 192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.342 [INFO][4503] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.349 [INFO][4503] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.349 [INFO][4503] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.356 [INFO][4503] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8 Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.383 [INFO][4503] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.447 [INFO][4503] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.5/26] block=192.168.45.0/26 handle="k8s-pod-network.33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.447 [INFO][4503] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.5/26] handle="k8s-pod-network.33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.447 [INFO][4503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
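The IPAM trace above follows the assignment path for the apiserver pod: confirm this node's affinity for block 192.168.45.0/26, load the block, and claim one free address from it (192.168.45.5). A compact sketch of picking the next free address inside a /26, assuming a simple linear scan over a used-address set rather than Calico's actual block bookkeeping:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree returns the first address in the block that is not already used,
    // skipping the network address itself. A /26 holds 64 addresses.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        addr := block.Addr().Next() // skip .0
        for block.Contains(addr) {
            if !used[addr] {
                return addr, true
            }
            addr = addr.Next()
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.45.0/26")
        used := map[netip.Addr]bool{ // addresses already claimed on this node
            netip.MustParseAddr("192.168.45.1"): true,
            netip.MustParseAddr("192.168.45.2"): true,
            netip.MustParseAddr("192.168.45.3"): true,
            netip.MustParseAddr("192.168.45.4"): true,
        }
        if ip, ok := nextFree(block, used); ok {
            fmt.Printf("assigned %s/26 from block %s\n", ip, block) // 192.168.45.5
        }
    }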
Feb 13 20:16:23.508276 containerd[1475]: 2025-02-13 20:16:23.447 [INFO][4503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.5/26] IPv6=[] ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" HandleID="k8s-pod-network.33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:23.512380 containerd[1475]: 2025-02-13 20:16:23.455 [INFO][4460] cni-plugin/k8s.go 386: Populated endpoint ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-mz564" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0", GenerateName:"calico-apiserver-8479cf5b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a042f4c6-cdbb-48a4-9920-8318d966e49f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479cf5b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"", Pod:"calico-apiserver-8479cf5b7f-mz564", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d7b2dae214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:23.512380 containerd[1475]: 2025-02-13 20:16:23.456 [INFO][4460] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.5/32] ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-mz564" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:23.512380 containerd[1475]: 2025-02-13 20:16:23.456 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d7b2dae214 ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-mz564" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:23.512380 containerd[1475]: 2025-02-13 20:16:23.472 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-mz564" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:23.512380 containerd[1475]: 2025-02-13 20:16:23.474 [INFO][4460] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-mz564" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0", GenerateName:"calico-apiserver-8479cf5b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a042f4c6-cdbb-48a4-9920-8318d966e49f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479cf5b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8", Pod:"calico-apiserver-8479cf5b7f-mz564", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d7b2dae214", MAC:"02:c6:8d:44:a6:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:23.512380 containerd[1475]: 2025-02-13 20:16:23.499 [INFO][4460] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8" Namespace="calico-apiserver" Pod="calico-apiserver-8479cf5b7f-mz564" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:23.579841 systemd-networkd[1378]: cali8887062fda0: Link UP Feb 13 20:16:23.582971 systemd-networkd[1378]: cali8887062fda0: Gained carrier Feb 13 20:16:23.593781 containerd[1475]: time="2025-02-13T20:16:23.590878763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:23.593781 containerd[1475]: time="2025-02-13T20:16:23.590969723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:23.593781 containerd[1475]: time="2025-02-13T20:16:23.590988155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:23.593781 containerd[1475]: time="2025-02-13T20:16:23.591125846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.123 [INFO][4476] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.154 [INFO][4476] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0 csi-node-driver- calico-system f105c06f-1a6f-4ec2-924d-9b57627c66c2 811 0 2025-02-13 20:15:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-e-9d3732dae3 csi-node-driver-624nw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8887062fda0 [] []}} ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Namespace="calico-system" Pod="csi-node-driver-624nw" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.154 [INFO][4476] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Namespace="calico-system" Pod="csi-node-driver-624nw" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.325 [INFO][4513] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" HandleID="k8s-pod-network.c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.356 [INFO][4513] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" HandleID="k8s-pod-network.c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040dc20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-e-9d3732dae3", "pod":"csi-node-driver-624nw", "timestamp":"2025-02-13 20:16:23.325372841 +0000 UTC"}, Hostname:"ci-4081.3.1-e-9d3732dae3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.357 [INFO][4513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.447 [INFO][4513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
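Setting "the host side veth name to cali9d7b2dae214" reflects Calico's convention of a fixed cali prefix plus a short hex hash, which keeps the name unique and within the kernel's 15-character IFNAMSIZ limit. A sketch of that scheme, assuming the hash input is the namespace/pod key and that the first 11 hex characters of a SHA-1 digest are kept; the exact input Calico hashes may differ:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName builds "cali" + 11 hex chars of a SHA-1 over the endpoint key,
    // giving a 15-character name that fits the kernel's IFNAMSIZ limit.
    func vethName(endpointKey string) string {
        sum := sha1.Sum([]byte(endpointKey))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        name := vethName("calico-apiserver/calico-apiserver-8479cf5b7f-mz564")
        fmt.Println(name, "len:", len(name)) // always 15 characters
    }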
Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.447 [INFO][4513] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-e-9d3732dae3' Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.471 [INFO][4513] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.500 [INFO][4513] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.515 [INFO][4513] ipam/ipam.go 489: Trying affinity for 192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.522 [INFO][4513] ipam/ipam.go 155: Attempting to load block cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.528 [INFO][4513] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.528 [INFO][4513] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.532 [INFO][4513] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.545 [INFO][4513] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.568 [INFO][4513] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.45.6/26] block=192.168.45.0/26 handle="k8s-pod-network.c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.569 [INFO][4513] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.6/26] handle="k8s-pod-network.c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" host="ci-4081.3.1-e-9d3732dae3" Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.569 [INFO][4513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
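Comparing timestamps across the two IPAM traces shows why they run back to back: the csi-node-driver request logged "About to acquire host-wide IPAM lock" at 20:16:23.357 but only acquired it at 20:16:23.447, the same instant the apiserver assignment released it. A minimal sketch of that serialization with a shared mutex standing in for the host-wide lock (Calico's real lock spans processes; the mutex only illustrates the ordering):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var hostWideLock sync.Mutex // stand-in for the per-host IPAM lock in the trace
        var wg sync.WaitGroup

        assign := func(pod string, hold time.Duration) {
            defer wg.Done()
            fmt.Println(pod, "about to acquire host-wide IPAM lock")
            hostWideLock.Lock()
            fmt.Println(pod, "acquired host-wide IPAM lock")
            time.Sleep(hold) // block load and address claim happen under the lock
            hostWideLock.Unlock()
            fmt.Println(pod, "released host-wide IPAM lock")
        }

        wg.Add(2)
        go assign("calico-apiserver-8479cf5b7f-mz564", 100*time.Millisecond)
        time.Sleep(10 * time.Millisecond) // the apiserver request arrived first
        go assign("csi-node-driver-624nw", 20*time.Millisecond)
        wg.Wait() // the second request waits out the first, as the timestamps show
    }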
Feb 13 20:16:23.634208 containerd[1475]: 2025-02-13 20:16:23.569 [INFO][4513] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.6/26] IPv6=[] ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" HandleID="k8s-pod-network.c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:23.635429 containerd[1475]: 2025-02-13 20:16:23.573 [INFO][4476] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Namespace="calico-system" Pod="csi-node-driver-624nw" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f105c06f-1a6f-4ec2-924d-9b57627c66c2", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"", Pod:"csi-node-driver-624nw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8887062fda0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:23.635429 containerd[1475]: 2025-02-13 20:16:23.574 [INFO][4476] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.45.6/32] ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Namespace="calico-system" Pod="csi-node-driver-624nw" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:23.635429 containerd[1475]: 2025-02-13 20:16:23.574 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8887062fda0 ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Namespace="calico-system" Pod="csi-node-driver-624nw" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:23.635429 containerd[1475]: 2025-02-13 20:16:23.582 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Namespace="calico-system" Pod="csi-node-driver-624nw" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:23.635429 containerd[1475]: 2025-02-13 20:16:23.590 [INFO][4476] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Namespace="calico-system" Pod="csi-node-driver-624nw" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f105c06f-1a6f-4ec2-924d-9b57627c66c2", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c", Pod:"csi-node-driver-624nw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8887062fda0", MAC:"92:12:e1:97:3d:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:23.635429 containerd[1475]: 2025-02-13 20:16:23.626 [INFO][4476] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c" Namespace="calico-system" Pod="csi-node-driver-624nw" WorkloadEndpoint="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:23.647600 systemd[1]: Started cri-containerd-33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8.scope - libcontainer container 33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8. Feb 13 20:16:23.691938 systemd[1]: run-netns-cni\x2dcf7a3083\x2dee28\x2d128b\x2da1af\x2df771999dcfd6.mount: Deactivated successfully. Feb 13 20:16:23.772392 containerd[1475]: time="2025-02-13T20:16:23.771621554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:23.772392 containerd[1475]: time="2025-02-13T20:16:23.771733368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:23.772392 containerd[1475]: time="2025-02-13T20:16:23.771780181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:23.772392 containerd[1475]: time="2025-02-13T20:16:23.771952294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:23.821777 kubelet[2519]: E0213 20:16:23.820239 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:23.825480 kubelet[2519]: E0213 20:16:23.825432 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:23.829094 kubelet[2519]: E0213 20:16:23.829026 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:23.836226 systemd[1]: Started cri-containerd-c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c.scope - libcontainer container c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c. Feb 13 20:16:23.881340 kubelet[2519]: I0213 20:16:23.881153 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7887f69768-ls7rf" podStartSLOduration=25.590761674 podStartE2EDuration="29.881116419s" podCreationTimestamp="2025-02-13 20:15:54 +0000 UTC" firstStartedPulling="2025-02-13 20:16:18.279473131 +0000 UTC m=+37.201823796" lastFinishedPulling="2025-02-13 20:16:22.569827895 +0000 UTC m=+41.492178541" observedRunningTime="2025-02-13 20:16:23.876601343 +0000 UTC m=+42.798952012" watchObservedRunningTime="2025-02-13 20:16:23.881116419 +0000 UTC m=+42.803467083" Feb 13 20:16:23.981031 containerd[1475]: time="2025-02-13T20:16:23.980591866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-624nw,Uid:f105c06f-1a6f-4ec2-924d-9b57627c66c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c\"" Feb 13 20:16:24.031444 containerd[1475]: time="2025-02-13T20:16:24.030521017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8479cf5b7f-mz564,Uid:a042f4c6-cdbb-48a4-9920-8318d966e49f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8\"" Feb 13 20:16:24.840761 kubelet[2519]: E0213 20:16:24.839098 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:24.840761 kubelet[2519]: E0213 20:16:24.840028 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:25.012121 kernel: bpftool[4716]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:16:25.175541 systemd-networkd[1378]: cali9d7b2dae214: Gained IPv6LL Feb 13 20:16:25.559935 systemd-networkd[1378]: cali8887062fda0: Gained IPv6LL Feb 13 20:16:26.260520 containerd[1475]: time="2025-02-13T20:16:26.260144243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:26.264201 containerd[1475]: time="2025-02-13T20:16:26.262000153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:16:26.265820 containerd[1475]: 
time="2025-02-13T20:16:26.265728861Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:26.272896 containerd[1475]: time="2025-02-13T20:16:26.272654482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:26.275763 containerd[1475]: time="2025-02-13T20:16:26.275275921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.702342821s" Feb 13 20:16:26.275763 containerd[1475]: time="2025-02-13T20:16:26.275347916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:16:26.278915 containerd[1475]: time="2025-02-13T20:16:26.278607898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:16:26.283012 containerd[1475]: time="2025-02-13T20:16:26.281985219Z" level=info msg="CreateContainer within sandbox \"e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:16:26.321779 containerd[1475]: time="2025-02-13T20:16:26.321466593Z" level=info msg="CreateContainer within sandbox \"e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"80de1677f94a686173b5a503f84edbfa0217924e7dafbe05066812b02d56c322\"" Feb 13 20:16:26.322798 containerd[1475]: time="2025-02-13T20:16:26.322686182Z" level=info msg="StartContainer for \"80de1677f94a686173b5a503f84edbfa0217924e7dafbe05066812b02d56c322\"" Feb 13 20:16:26.467537 systemd-networkd[1378]: vxlan.calico: Link UP Feb 13 20:16:26.467550 systemd-networkd[1378]: vxlan.calico: Gained carrier Feb 13 20:16:26.524805 systemd[1]: Started cri-containerd-80de1677f94a686173b5a503f84edbfa0217924e7dafbe05066812b02d56c322.scope - libcontainer container 80de1677f94a686173b5a503f84edbfa0217924e7dafbe05066812b02d56c322. 
Feb 13 20:16:26.659850 containerd[1475]: time="2025-02-13T20:16:26.659722662Z" level=info msg="StartContainer for \"80de1677f94a686173b5a503f84edbfa0217924e7dafbe05066812b02d56c322\" returns successfully" Feb 13 20:16:26.897372 kubelet[2519]: I0213 20:16:26.897299 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8479cf5b7f-4q5j2" podStartSLOduration=27.709319134 podStartE2EDuration="32.897274908s" podCreationTimestamp="2025-02-13 20:15:54 +0000 UTC" firstStartedPulling="2025-02-13 20:16:21.090402503 +0000 UTC m=+40.012753165" lastFinishedPulling="2025-02-13 20:16:26.278358293 +0000 UTC m=+45.200708939" observedRunningTime="2025-02-13 20:16:26.894716793 +0000 UTC m=+45.817067456" watchObservedRunningTime="2025-02-13 20:16:26.897274908 +0000 UTC m=+45.819625570" Feb 13 20:16:27.862582 kubelet[2519]: I0213 20:16:27.862091 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:27.929508 containerd[1475]: time="2025-02-13T20:16:27.929441062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:27.934134 containerd[1475]: time="2025-02-13T20:16:27.934055046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:16:27.938613 containerd[1475]: time="2025-02-13T20:16:27.937038863Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:27.945542 containerd[1475]: time="2025-02-13T20:16:27.945391405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:27.947274 containerd[1475]: time="2025-02-13T20:16:27.946620294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.667954653s" Feb 13 20:16:27.948080 containerd[1475]: time="2025-02-13T20:16:27.948042617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:16:27.949926 containerd[1475]: time="2025-02-13T20:16:27.949895395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:16:27.955314 containerd[1475]: time="2025-02-13T20:16:27.955239312Z" level=info msg="CreateContainer within sandbox \"c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:16:27.986778 containerd[1475]: time="2025-02-13T20:16:27.986663983Z" level=info msg="CreateContainer within sandbox \"c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"47e13a84b73e8cfacbe4bfbad06c5859aa8fe6065a9f445c05046f5b31f8e5bb\"" Feb 13 20:16:27.989407 containerd[1475]: time="2025-02-13T20:16:27.988986982Z" level=info msg="StartContainer for \"47e13a84b73e8cfacbe4bfbad06c5859aa8fe6065a9f445c05046f5b31f8e5bb\"" Feb 13 20:16:28.089121 
systemd[1]: Started cri-containerd-47e13a84b73e8cfacbe4bfbad06c5859aa8fe6065a9f445c05046f5b31f8e5bb.scope - libcontainer container 47e13a84b73e8cfacbe4bfbad06c5859aa8fe6065a9f445c05046f5b31f8e5bb. Feb 13 20:16:28.146006 containerd[1475]: time="2025-02-13T20:16:28.145827732Z" level=info msg="StartContainer for \"47e13a84b73e8cfacbe4bfbad06c5859aa8fe6065a9f445c05046f5b31f8e5bb\" returns successfully" Feb 13 20:16:28.183117 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Feb 13 20:16:28.425952 containerd[1475]: time="2025-02-13T20:16:28.425790327Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:28.428984 containerd[1475]: time="2025-02-13T20:16:28.428924790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:16:28.435615 containerd[1475]: time="2025-02-13T20:16:28.435299836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 484.410628ms" Feb 13 20:16:28.435615 containerd[1475]: time="2025-02-13T20:16:28.435371659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:16:28.438043 containerd[1475]: time="2025-02-13T20:16:28.437708558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:16:28.440796 containerd[1475]: time="2025-02-13T20:16:28.440645624Z" level=info msg="CreateContainer within sandbox \"33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:16:28.476249 containerd[1475]: time="2025-02-13T20:16:28.476144714Z" level=info msg="CreateContainer within sandbox \"33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b452e1c5fa121a4a53b863c09783936392ed3f0692c6684cac4e8bba772a3ac3\"" Feb 13 20:16:28.480805 containerd[1475]: time="2025-02-13T20:16:28.479140619Z" level=info msg="StartContainer for \"b452e1c5fa121a4a53b863c09783936392ed3f0692c6684cac4e8bba772a3ac3\"" Feb 13 20:16:28.556261 systemd[1]: Started cri-containerd-b452e1c5fa121a4a53b863c09783936392ed3f0692c6684cac4e8bba772a3ac3.scope - libcontainer container b452e1c5fa121a4a53b863c09783936392ed3f0692c6684cac4e8bba772a3ac3. 
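The second pull of ghcr.io/flatcar/calico/apiserver:v3.29.1 finishes in 484.410628ms after reading only 77 bytes: the layers are already in containerd's content-addressed store from the earlier pull, so only the manifest needs to be resolved (hence an ImageUpdate rather than ImageCreate event). A toy sketch of the digest-keyed lookup that makes a re-pull cheap; the store type and method names are illustrative, not containerd's API:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // contentStore models a content-addressed store: blobs are keyed by their
    // sha256 digest, so a layer that is already present is never re-downloaded.
    type contentStore map[string][]byte

    func (s contentStore) put(blob []byte) string {
        d := fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
        s[d] = blob
        return d
    }

    // fetch returns the blob and how many bytes had to be pulled from the registry.
    func (s contentStore) fetch(digest string, remote []byte) ([]byte, int) {
        if blob, ok := s[digest]; ok {
            return blob, 0 // already in the store: nothing to transfer
        }
        s[digest] = remote
        return remote, len(remote)
    }

    func main() {
        store := contentStore{}
        layer := []byte("calico/apiserver layer contents")
        digest := store.put(layer) // first pull stored the layer

        _, transferred := store.fetch(digest, layer) // second pull of the same image
        fmt.Println("bytes transferred on re-pull:", transferred) // 0
    }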
Feb 13 20:16:28.708853 containerd[1475]: time="2025-02-13T20:16:28.708388697Z" level=info msg="StartContainer for \"b452e1c5fa121a4a53b863c09783936392ed3f0692c6684cac4e8bba772a3ac3\" returns successfully" Feb 13 20:16:28.926658 kubelet[2519]: I0213 20:16:28.926389 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8479cf5b7f-mz564" podStartSLOduration=30.523987477 podStartE2EDuration="34.926362689s" podCreationTimestamp="2025-02-13 20:15:54 +0000 UTC" firstStartedPulling="2025-02-13 20:16:24.034344327 +0000 UTC m=+42.956694992" lastFinishedPulling="2025-02-13 20:16:28.436719538 +0000 UTC m=+47.359070204" observedRunningTime="2025-02-13 20:16:28.926231465 +0000 UTC m=+47.848582145" watchObservedRunningTime="2025-02-13 20:16:28.926362689 +0000 UTC m=+47.848713353" Feb 13 20:16:29.921913 kubelet[2519]: I0213 20:16:29.921648 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:30.228212 containerd[1475]: time="2025-02-13T20:16:30.226597343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:30.231179 containerd[1475]: time="2025-02-13T20:16:30.231093580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:16:30.236849 containerd[1475]: time="2025-02-13T20:16:30.235241381Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:30.242106 containerd[1475]: time="2025-02-13T20:16:30.242037605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:30.244891 containerd[1475]: time="2025-02-13T20:16:30.244812600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.806889474s" Feb 13 20:16:30.244891 containerd[1475]: time="2025-02-13T20:16:30.244887292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:16:30.251780 containerd[1475]: time="2025-02-13T20:16:30.251426667Z" level=info msg="CreateContainer within sandbox \"c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:16:30.287837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774227738.mount: Deactivated successfully. 
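The pod_startup_latency_tracker entry above for calico-apiserver-8479cf5b7f-mz564 encodes a simple relation: podStartE2EDuration is the observed running time minus the pod's creation timestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). Re-deriving the logged numbers under that assumption (a reconstruction from the logged fields, not kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(layout, s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created := mustParse(layout, "2025-02-13 20:15:54 +0000 UTC")                 // podCreationTimestamp
        firstPull := mustParse(layout, "2025-02-13 20:16:24.034344327 +0000 UTC")     // firstStartedPulling
        lastPull := mustParse(layout, "2025-02-13 20:16:28.436719538 +0000 UTC")      // lastFinishedPulling
        running := mustParse(layout, "2025-02-13 20:16:28.926362689 +0000 UTC")       // watchObservedRunningTime

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus pull window
        fmt.Println("E2E:", e2e)             // 34.926362689s, as logged
        fmt.Println("SLO:", slo)             // 30.523987478s, matching the logged 30.523987477 up to rounding
    }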
Feb 13 20:16:30.291407 containerd[1475]: time="2025-02-13T20:16:30.291216129Z" level=info msg="CreateContainer within sandbox \"c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b0fa4a543e352ccaf017bc1a7e009096e0b5031f974d1e915e1f8dd5f059dd11\"" Feb 13 20:16:30.292695 containerd[1475]: time="2025-02-13T20:16:30.292624993Z" level=info msg="StartContainer for \"b0fa4a543e352ccaf017bc1a7e009096e0b5031f974d1e915e1f8dd5f059dd11\"" Feb 13 20:16:30.361150 systemd[1]: Started cri-containerd-b0fa4a543e352ccaf017bc1a7e009096e0b5031f974d1e915e1f8dd5f059dd11.scope - libcontainer container b0fa4a543e352ccaf017bc1a7e009096e0b5031f974d1e915e1f8dd5f059dd11. Feb 13 20:16:30.409423 containerd[1475]: time="2025-02-13T20:16:30.409174869Z" level=info msg="StartContainer for \"b0fa4a543e352ccaf017bc1a7e009096e0b5031f974d1e915e1f8dd5f059dd11\" returns successfully" Feb 13 20:16:30.740992 kubelet[2519]: I0213 20:16:30.740900 2519 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:16:30.744337 kubelet[2519]: I0213 20:16:30.744294 2519 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:16:30.952654 kubelet[2519]: I0213 20:16:30.951475 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-624nw" podStartSLOduration=30.689184975 podStartE2EDuration="36.951429237s" podCreationTimestamp="2025-02-13 20:15:54 +0000 UTC" firstStartedPulling="2025-02-13 20:16:23.986427101 +0000 UTC m=+42.908777751" lastFinishedPulling="2025-02-13 20:16:30.248671352 +0000 UTC m=+49.171022013" observedRunningTime="2025-02-13 20:16:30.95119676 +0000 UTC m=+49.873547429" watchObservedRunningTime="2025-02-13 20:16:30.951429237 +0000 UTC m=+49.873780101" Feb 13 20:16:40.661801 systemd[1]: run-containerd-runc-k8s.io-7bc85994d715af3ab973090023616c75a1f5f240b45be2c4f0b42e20a1743ab0-runc.1E9HjZ.mount: Deactivated successfully. Feb 13 20:16:40.728595 kubelet[2519]: E0213 20:16:40.728537 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:16:41.266895 containerd[1475]: time="2025-02-13T20:16:41.265998562Z" level=info msg="StopPodSandbox for \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\"" Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.381 [WARNING][5013] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0", GenerateName:"calico-apiserver-8479cf5b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a042f4c6-cdbb-48a4-9920-8318d966e49f", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479cf5b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8", Pod:"calico-apiserver-8479cf5b7f-mz564", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d7b2dae214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.384 [INFO][5013] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.384 [INFO][5013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" iface="eth0" netns="" Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.384 [INFO][5013] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.384 [INFO][5013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.424 [INFO][5021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" HandleID="k8s-pod-network.b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.424 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.424 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.433 [WARNING][5021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" HandleID="k8s-pod-network.b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.433 [INFO][5021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" HandleID="k8s-pod-network.b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.435 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:41.441396 containerd[1475]: 2025-02-13 20:16:41.438 [INFO][5013] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:41.442355 containerd[1475]: time="2025-02-13T20:16:41.441412710Z" level=info msg="TearDown network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\" successfully" Feb 13 20:16:41.442355 containerd[1475]: time="2025-02-13T20:16:41.441443643Z" level=info msg="StopPodSandbox for \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\" returns successfully" Feb 13 20:16:41.443548 containerd[1475]: time="2025-02-13T20:16:41.442600350Z" level=info msg="RemovePodSandbox for \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\"" Feb 13 20:16:41.443548 containerd[1475]: time="2025-02-13T20:16:41.442647128Z" level=info msg="Forcibly stopping sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\"" Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.508 [WARNING][5039] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0", GenerateName:"calico-apiserver-8479cf5b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a042f4c6-cdbb-48a4-9920-8318d966e49f", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479cf5b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"33a58bac0d84535335712a9bb4b21f55a7e4753af993f5ecb03cd7ce5a075bb8", Pod:"calico-apiserver-8479cf5b7f-mz564", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d7b2dae214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.509 [INFO][5039] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.509 [INFO][5039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" iface="eth0" netns="" Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.509 [INFO][5039] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.509 [INFO][5039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.547 [INFO][5045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" HandleID="k8s-pod-network.b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.547 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.548 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.557 [WARNING][5045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" HandleID="k8s-pod-network.b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.557 [INFO][5045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" HandleID="k8s-pod-network.b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--mz564-eth0" Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.559 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:41.564825 containerd[1475]: 2025-02-13 20:16:41.562 [INFO][5039] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe" Feb 13 20:16:41.565696 containerd[1475]: time="2025-02-13T20:16:41.564845589Z" level=info msg="TearDown network for sandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\" successfully" Feb 13 20:16:41.582728 containerd[1475]: time="2025-02-13T20:16:41.582627612Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:16:41.582927 containerd[1475]: time="2025-02-13T20:16:41.582783367Z" level=info msg="RemovePodSandbox \"b9748e11b0280bdf9cdd36f90dfff42ed042295308c1b1729d116914f10d06fe\" returns successfully" Feb 13 20:16:41.583649 containerd[1475]: time="2025-02-13T20:16:41.583580554Z" level=info msg="StopPodSandbox for \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\"" Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.648 [WARNING][5063] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cb4743e4-e761-441d-ae53-4d9924d89649", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f", Pod:"coredns-6f6b679f8f-fbch8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidceff31beac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.648 [INFO][5063] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.648 [INFO][5063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" iface="eth0" netns="" Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.648 [INFO][5063] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.648 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.682 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" HandleID="k8s-pod-network.843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.682 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.682 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.690 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" HandleID="k8s-pod-network.843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.690 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" HandleID="k8s-pod-network.843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.694 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:41.699438 containerd[1475]: 2025-02-13 20:16:41.696 [INFO][5063] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:41.701361 containerd[1475]: time="2025-02-13T20:16:41.699712939Z" level=info msg="TearDown network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\" successfully" Feb 13 20:16:41.701361 containerd[1475]: time="2025-02-13T20:16:41.699917394Z" level=info msg="StopPodSandbox for \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\" returns successfully" Feb 13 20:16:41.702764 containerd[1475]: time="2025-02-13T20:16:41.702158980Z" level=info msg="RemovePodSandbox for \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\"" Feb 13 20:16:41.702764 containerd[1475]: time="2025-02-13T20:16:41.702237559Z" level=info msg="Forcibly stopping sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\"" Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.768 [WARNING][5087] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cb4743e4-e761-441d-ae53-4d9924d89649", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"15f1d230f79314ca45170eb2abe1f8008829b7782fc3216898b709e923d4460f", Pod:"coredns-6f6b679f8f-fbch8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidceff31beac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.768 [INFO][5087] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.768 [INFO][5087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" iface="eth0" netns="" Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.768 [INFO][5087] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.768 [INFO][5087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.807 [INFO][5094] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" HandleID="k8s-pod-network.843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.807 [INFO][5094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.807 [INFO][5094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.816 [WARNING][5094] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" HandleID="k8s-pod-network.843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.816 [INFO][5094] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" HandleID="k8s-pod-network.843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--fbch8-eth0" Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.819 [INFO][5094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:41.824444 containerd[1475]: 2025-02-13 20:16:41.821 [INFO][5087] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a" Feb 13 20:16:41.826201 containerd[1475]: time="2025-02-13T20:16:41.825054266Z" level=info msg="TearDown network for sandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\" successfully" Feb 13 20:16:41.834383 containerd[1475]: time="2025-02-13T20:16:41.834088662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:16:41.834383 containerd[1475]: time="2025-02-13T20:16:41.834206113Z" level=info msg="RemovePodSandbox \"843ec67afa0193ee05663d6703ca90e7e8818ee0a443663eb8e0767453c9447a\" returns successfully" Feb 13 20:16:41.835341 containerd[1475]: time="2025-02-13T20:16:41.834909934Z" level=info msg="StopPodSandbox for \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\"" Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.890 [WARNING][5112] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"58af9d69-676e-41db-a9d1-fb2841461113", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7", Pod:"coredns-6f6b679f8f-62bhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali341109248de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.891 [INFO][5112] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.891 [INFO][5112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" iface="eth0" netns="" Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.891 [INFO][5112] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.891 [INFO][5112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.924 [INFO][5118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" HandleID="k8s-pod-network.f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.924 [INFO][5118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.924 [INFO][5118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.933 [WARNING][5118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" HandleID="k8s-pod-network.f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.933 [INFO][5118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" HandleID="k8s-pod-network.f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.936 [INFO][5118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:41.942128 containerd[1475]: 2025-02-13 20:16:41.939 [INFO][5112] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:41.943723 containerd[1475]: time="2025-02-13T20:16:41.943059642Z" level=info msg="TearDown network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\" successfully" Feb 13 20:16:41.943723 containerd[1475]: time="2025-02-13T20:16:41.943119825Z" level=info msg="StopPodSandbox for \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\" returns successfully" Feb 13 20:16:41.945052 containerd[1475]: time="2025-02-13T20:16:41.944777701Z" level=info msg="RemovePodSandbox for \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\"" Feb 13 20:16:41.945052 containerd[1475]: time="2025-02-13T20:16:41.944826268Z" level=info msg="Forcibly stopping sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\"" Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.010 [WARNING][5136] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"58af9d69-676e-41db-a9d1-fb2841461113", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"063d95aa3cc6ba34c834adb56ecf142f5d72ac341121350c2940397fa55fdbe7", Pod:"coredns-6f6b679f8f-62bhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali341109248de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.011 [INFO][5136] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.011 [INFO][5136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" iface="eth0" netns="" Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.011 [INFO][5136] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.011 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.043 [INFO][5142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" HandleID="k8s-pod-network.f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.043 [INFO][5142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.043 [INFO][5142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.055 [WARNING][5142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" HandleID="k8s-pod-network.f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.055 [INFO][5142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" HandleID="k8s-pod-network.f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Workload="ci--4081.3.1--e--9d3732dae3-k8s-coredns--6f6b679f8f--62bhh-eth0" Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.057 [INFO][5142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:42.063861 containerd[1475]: 2025-02-13 20:16:42.060 [INFO][5136] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610" Feb 13 20:16:42.063861 containerd[1475]: time="2025-02-13T20:16:42.062763773Z" level=info msg="TearDown network for sandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\" successfully" Feb 13 20:16:42.074949 containerd[1475]: time="2025-02-13T20:16:42.074771204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:16:42.075104 containerd[1475]: time="2025-02-13T20:16:42.074992829Z" level=info msg="RemovePodSandbox \"f6a63d570cdfe4692915369a2aae67f2addf0b3b3f79e725045a87a55ae07610\" returns successfully" Feb 13 20:16:42.077811 containerd[1475]: time="2025-02-13T20:16:42.075629340Z" level=info msg="StopPodSandbox for \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\"" Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.149 [WARNING][5160] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0", GenerateName:"calico-kube-controllers-7887f69768-", Namespace:"calico-system", SelfLink:"", UID:"648198c7-66dc-48d6-8b8d-fd320bc90666", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7887f69768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96", Pod:"calico-kube-controllers-7887f69768-ls7rf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7046ea251e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.150 [INFO][5160] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.150 [INFO][5160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" iface="eth0" netns="" Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.150 [INFO][5160] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.150 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.191 [INFO][5166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" HandleID="k8s-pod-network.5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.191 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.191 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.200 [WARNING][5166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" HandleID="k8s-pod-network.5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.200 [INFO][5166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" HandleID="k8s-pod-network.5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.202 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:42.207935 containerd[1475]: 2025-02-13 20:16:42.205 [INFO][5160] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:42.209144 containerd[1475]: time="2025-02-13T20:16:42.207959243Z" level=info msg="TearDown network for sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\" successfully" Feb 13 20:16:42.209144 containerd[1475]: time="2025-02-13T20:16:42.207988087Z" level=info msg="StopPodSandbox for \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\" returns successfully" Feb 13 20:16:42.209144 containerd[1475]: time="2025-02-13T20:16:42.208951790Z" level=info msg="RemovePodSandbox for \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\"" Feb 13 20:16:42.209144 containerd[1475]: time="2025-02-13T20:16:42.208988664Z" level=info msg="Forcibly stopping sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\"" Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.263 [WARNING][5184] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0", GenerateName:"calico-kube-controllers-7887f69768-", Namespace:"calico-system", SelfLink:"", UID:"648198c7-66dc-48d6-8b8d-fd320bc90666", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7887f69768", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"7c074e44d7abc934522835f118a16be057c0e4dbd7bf9e667003e11ed0a81f96", Pod:"calico-kube-controllers-7887f69768-ls7rf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7046ea251e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.264 [INFO][5184] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.264 [INFO][5184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" iface="eth0" netns="" Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.264 [INFO][5184] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.264 [INFO][5184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.295 [INFO][5190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" HandleID="k8s-pod-network.5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.295 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.295 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.303 [WARNING][5190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" HandleID="k8s-pod-network.5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.303 [INFO][5190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" HandleID="k8s-pod-network.5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--kube--controllers--7887f69768--ls7rf-eth0" Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.306 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:42.310865 containerd[1475]: 2025-02-13 20:16:42.308 [INFO][5184] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5" Feb 13 20:16:42.312485 containerd[1475]: time="2025-02-13T20:16:42.310889037Z" level=info msg="TearDown network for sandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\" successfully" Feb 13 20:16:42.316907 containerd[1475]: time="2025-02-13T20:16:42.316830388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:16:42.317111 containerd[1475]: time="2025-02-13T20:16:42.316973207Z" level=info msg="RemovePodSandbox \"5b8fe4e580462ddad94fb9bc3d6f4371ae1085361204710bb116decc74506ab5\" returns successfully" Feb 13 20:16:42.317809 containerd[1475]: time="2025-02-13T20:16:42.317776469Z" level=info msg="StopPodSandbox for \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\"" Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.372 [WARNING][5208] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f105c06f-1a6f-4ec2-924d-9b57627c66c2", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c", Pod:"csi-node-driver-624nw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8887062fda0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.372 [INFO][5208] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.372 [INFO][5208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" iface="eth0" netns="" Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.372 [INFO][5208] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.372 [INFO][5208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.405 [INFO][5214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" HandleID="k8s-pod-network.679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.405 [INFO][5214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.405 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.414 [WARNING][5214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" HandleID="k8s-pod-network.679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.415 [INFO][5214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" HandleID="k8s-pod-network.679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.417 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:42.422118 containerd[1475]: 2025-02-13 20:16:42.419 [INFO][5208] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:42.423060 containerd[1475]: time="2025-02-13T20:16:42.422166335Z" level=info msg="TearDown network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\" successfully" Feb 13 20:16:42.423060 containerd[1475]: time="2025-02-13T20:16:42.422191703Z" level=info msg="StopPodSandbox for \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\" returns successfully" Feb 13 20:16:42.423060 containerd[1475]: time="2025-02-13T20:16:42.422665573Z" level=info msg="RemovePodSandbox for \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\"" Feb 13 20:16:42.423060 containerd[1475]: time="2025-02-13T20:16:42.422693508Z" level=info msg="Forcibly stopping sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\"" Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.476 [WARNING][5232] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f105c06f-1a6f-4ec2-924d-9b57627c66c2", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"c273863be31e651009d6821129f6ee69d775d9f482f963f0c24b7c8ec9c4ba9c", Pod:"csi-node-driver-624nw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8887062fda0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.476 [INFO][5232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.476 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" iface="eth0" netns="" Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.476 [INFO][5232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.476 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.508 [INFO][5238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" HandleID="k8s-pod-network.679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.508 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.509 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.519 [WARNING][5238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" HandleID="k8s-pod-network.679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.519 [INFO][5238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" HandleID="k8s-pod-network.679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Workload="ci--4081.3.1--e--9d3732dae3-k8s-csi--node--driver--624nw-eth0" Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.521 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:42.525961 containerd[1475]: 2025-02-13 20:16:42.523 [INFO][5232] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d" Feb 13 20:16:42.527481 containerd[1475]: time="2025-02-13T20:16:42.526026615Z" level=info msg="TearDown network for sandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\" successfully" Feb 13 20:16:42.531923 containerd[1475]: time="2025-02-13T20:16:42.531844523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:16:42.532061 containerd[1475]: time="2025-02-13T20:16:42.531948497Z" level=info msg="RemovePodSandbox \"679cab8c2c6fd76149461136a7b78f54f07c3b698a2cb0c81b4a6d521444ce1d\" returns successfully" Feb 13 20:16:42.532569 containerd[1475]: time="2025-02-13T20:16:42.532540899Z" level=info msg="StopPodSandbox for \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\"" Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.583 [WARNING][5256] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0", GenerateName:"calico-apiserver-8479cf5b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479cf5b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be", Pod:"calico-apiserver-8479cf5b7f-4q5j2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic96250e6c82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.584 [INFO][5256] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.584 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" iface="eth0" netns="" Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.584 [INFO][5256] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.584 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.617 [INFO][5262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" HandleID="k8s-pod-network.a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.622 [INFO][5262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.622 [INFO][5262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.630 [WARNING][5262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" HandleID="k8s-pod-network.a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.630 [INFO][5262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" HandleID="k8s-pod-network.a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.632 [INFO][5262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:42.637334 containerd[1475]: 2025-02-13 20:16:42.634 [INFO][5256] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:42.637334 containerd[1475]: time="2025-02-13T20:16:42.637216831Z" level=info msg="TearDown network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\" successfully" Feb 13 20:16:42.637334 containerd[1475]: time="2025-02-13T20:16:42.637254661Z" level=info msg="StopPodSandbox for \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\" returns successfully" Feb 13 20:16:42.639441 containerd[1475]: time="2025-02-13T20:16:42.638229865Z" level=info msg="RemovePodSandbox for \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\"" Feb 13 20:16:42.639441 containerd[1475]: time="2025-02-13T20:16:42.638266023Z" level=info msg="Forcibly stopping sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\"" Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.705 [WARNING][5281] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0", GenerateName:"calico-apiserver-8479cf5b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"82fe64ff-9e75-4fc3-a4a3-1b4e07cd7dd2", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8479cf5b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-e-9d3732dae3", ContainerID:"e3dd7c08a9679977275bab49d352371ed82333ee72b6f151592b38757a06a9be", Pod:"calico-apiserver-8479cf5b7f-4q5j2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic96250e6c82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.707 [INFO][5281] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.707 [INFO][5281] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" iface="eth0" netns="" Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.707 [INFO][5281] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.707 [INFO][5281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.756 [INFO][5287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" HandleID="k8s-pod-network.a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.756 [INFO][5287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.756 [INFO][5287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.765 [WARNING][5287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" HandleID="k8s-pod-network.a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.765 [INFO][5287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" HandleID="k8s-pod-network.a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Workload="ci--4081.3.1--e--9d3732dae3-k8s-calico--apiserver--8479cf5b7f--4q5j2-eth0" Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.767 [INFO][5287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:42.773479 containerd[1475]: 2025-02-13 20:16:42.770 [INFO][5281] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175" Feb 13 20:16:42.775918 containerd[1475]: time="2025-02-13T20:16:42.773473999Z" level=info msg="TearDown network for sandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\" successfully" Feb 13 20:16:42.783590 containerd[1475]: time="2025-02-13T20:16:42.783475031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:16:42.783860 containerd[1475]: time="2025-02-13T20:16:42.783630633Z" level=info msg="RemovePodSandbox \"a65555590c7de57752e4b1d8cec9d1fa297b9887d3a262d52a34548d4e8a4175\" returns successfully" Feb 13 20:16:45.306808 kubelet[2519]: I0213 20:16:45.306042 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:46.778297 systemd[1]: Started sshd@9-64.23.201.9:22-147.75.109.163:33780.service - OpenSSH per-connection server daemon (147.75.109.163:33780). Feb 13 20:16:46.958843 sshd[5299]: Accepted publickey for core from 147.75.109.163 port 33780 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:46.962808 sshd[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:46.981966 systemd-logind[1451]: New session 10 of user core. Feb 13 20:16:46.990625 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:16:47.050559 systemd[1]: run-containerd-runc-k8s.io-e152fe345c38f5dbb1298758e54ff647f822ece5a269613aa5a3831e97469ec0-runc.VjSmOc.mount: Deactivated successfully. Feb 13 20:16:48.042531 sshd[5299]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:48.053914 systemd[1]: sshd@9-64.23.201.9:22-147.75.109.163:33780.service: Deactivated successfully. Feb 13 20:16:48.059067 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:16:48.061440 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:16:48.062685 systemd-logind[1451]: Removed session 10. Feb 13 20:16:53.069297 systemd[1]: Started sshd@10-64.23.201.9:22-147.75.109.163:56722.service - OpenSSH per-connection server daemon (147.75.109.163:56722). 
Feb 13 20:16:53.136714 sshd[5364]: Accepted publickey for core from 147.75.109.163 port 56722 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:53.139031 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:53.150274 systemd-logind[1451]: New session 11 of user core. Feb 13 20:16:53.155377 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:16:53.438559 sshd[5364]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:53.444777 systemd[1]: sshd@10-64.23.201.9:22-147.75.109.163:56722.service: Deactivated successfully. Feb 13 20:16:53.448565 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:16:53.450472 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:16:53.452441 systemd-logind[1451]: Removed session 11. Feb 13 20:16:58.464628 systemd[1]: Started sshd@11-64.23.201.9:22-147.75.109.163:56736.service - OpenSSH per-connection server daemon (147.75.109.163:56736). Feb 13 20:16:58.536213 sshd[5378]: Accepted publickey for core from 147.75.109.163 port 56736 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:16:58.538858 sshd[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:58.545473 systemd-logind[1451]: New session 12 of user core. Feb 13 20:16:58.555086 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:16:58.704781 kubelet[2519]: I0213 20:16:58.704344 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:16:58.752246 sshd[5378]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:58.761439 systemd[1]: sshd@11-64.23.201.9:22-147.75.109.163:56736.service: Deactivated successfully. Feb 13 20:16:58.768955 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:16:58.771037 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:16:58.773855 systemd-logind[1451]: Removed session 12. Feb 13 20:17:00.330994 kubelet[2519]: E0213 20:17:00.330899 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:17:03.773282 systemd[1]: Started sshd@12-64.23.201.9:22-147.75.109.163:54448.service - OpenSSH per-connection server daemon (147.75.109.163:54448). Feb 13 20:17:03.844505 sshd[5393]: Accepted publickey for core from 147.75.109.163 port 54448 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:03.848136 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:03.856872 systemd-logind[1451]: New session 13 of user core. Feb 13 20:17:03.862087 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:17:04.055224 sshd[5393]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:04.064408 systemd[1]: sshd@12-64.23.201.9:22-147.75.109.163:54448.service: Deactivated successfully. Feb 13 20:17:04.068200 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:17:04.071717 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:17:04.073494 systemd-logind[1451]: Removed session 13. 
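The kubelet "Nameserver limits exceeded" errors that recur through the rest of this log stem from the long-standing three-nameserver cap on resolv.conf: the node evidently carries more than three entries (including a duplicate 67.207.67.2), so the extras are dropped and the applied nameserver line is logged. The sketch below reproduces that truncation behaviour; it is an illustration written for this document, not kubelet's dns.go, and the sample resolv.conf content is assumed.

// Illustrative sketch of the "Nameserver limits exceeded" behaviour above.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolv.conf / glibc limit

func applyNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf with four entries, similar in shape to what the
	// node above must be carrying (note the repeated resolver).
	conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 8.8.8.8\n"
	fmt.Println("applied:", applyNameservers(conf))
}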
Feb 13 20:17:04.331046 kubelet[2519]: E0213 20:17:04.330902 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:17:08.330875 kubelet[2519]: E0213 20:17:08.330726 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:17:09.077371 systemd[1]: Started sshd@13-64.23.201.9:22-147.75.109.163:54460.service - OpenSSH per-connection server daemon (147.75.109.163:54460). Feb 13 20:17:09.140811 sshd[5415]: Accepted publickey for core from 147.75.109.163 port 54460 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:09.143583 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:09.152285 systemd-logind[1451]: New session 14 of user core. Feb 13 20:17:09.159081 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:17:09.344152 sshd[5415]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:09.357160 systemd[1]: sshd@13-64.23.201.9:22-147.75.109.163:54460.service: Deactivated successfully. Feb 13 20:17:09.361934 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:17:09.366111 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:17:09.372310 systemd[1]: Started sshd@14-64.23.201.9:22-147.75.109.163:42308.service - OpenSSH per-connection server daemon (147.75.109.163:42308). Feb 13 20:17:09.375063 systemd-logind[1451]: Removed session 14. Feb 13 20:17:09.441020 sshd[5429]: Accepted publickey for core from 147.75.109.163 port 42308 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:09.444943 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:09.454789 systemd-logind[1451]: New session 15 of user core. Feb 13 20:17:09.464082 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:17:09.781340 sshd[5429]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:09.794106 systemd[1]: sshd@14-64.23.201.9:22-147.75.109.163:42308.service: Deactivated successfully. Feb 13 20:17:09.798701 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:17:09.803361 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:17:09.812270 systemd[1]: Started sshd@15-64.23.201.9:22-147.75.109.163:42318.service - OpenSSH per-connection server daemon (147.75.109.163:42318). Feb 13 20:17:09.816359 systemd-logind[1451]: Removed session 15. Feb 13 20:17:09.926146 sshd[5440]: Accepted publickey for core from 147.75.109.163 port 42318 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:09.929052 sshd[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:09.939634 systemd-logind[1451]: New session 16 of user core. Feb 13 20:17:09.945158 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:17:10.185123 sshd[5440]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:10.194145 systemd[1]: sshd@15-64.23.201.9:22-147.75.109.163:42318.service: Deactivated successfully. Feb 13 20:17:10.198351 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:17:10.200685 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. 
Feb 13 20:17:10.203792 systemd-logind[1451]: Removed session 16. Feb 13 20:17:15.206370 systemd[1]: Started sshd@16-64.23.201.9:22-147.75.109.163:42326.service - OpenSSH per-connection server daemon (147.75.109.163:42326). Feb 13 20:17:15.301795 sshd[5474]: Accepted publickey for core from 147.75.109.163 port 42326 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:15.305263 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:15.315029 systemd-logind[1451]: New session 17 of user core. Feb 13 20:17:15.322172 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:17:15.588721 sshd[5474]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:15.596424 systemd[1]: sshd@16-64.23.201.9:22-147.75.109.163:42326.service: Deactivated successfully. Feb 13 20:17:15.604055 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:17:15.609086 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:17:15.612237 systemd-logind[1451]: Removed session 17. Feb 13 20:17:20.612262 systemd[1]: Started sshd@17-64.23.201.9:22-147.75.109.163:57682.service - OpenSSH per-connection server daemon (147.75.109.163:57682). Feb 13 20:17:20.762252 sshd[5510]: Accepted publickey for core from 147.75.109.163 port 57682 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:20.770073 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:20.781118 systemd-logind[1451]: New session 18 of user core. Feb 13 20:17:20.790159 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:17:21.310038 sshd[5510]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:21.319536 systemd[1]: sshd@17-64.23.201.9:22-147.75.109.163:57682.service: Deactivated successfully. Feb 13 20:17:21.323125 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:17:21.324668 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:17:21.327277 systemd-logind[1451]: Removed session 18. Feb 13 20:17:21.331638 kubelet[2519]: E0213 20:17:21.331589 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:17:26.325334 systemd[1]: Started sshd@18-64.23.201.9:22-147.75.109.163:57684.service - OpenSSH per-connection server daemon (147.75.109.163:57684). Feb 13 20:17:26.473539 sshd[5528]: Accepted publickey for core from 147.75.109.163 port 57684 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:26.476856 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:26.487887 systemd-logind[1451]: New session 19 of user core. Feb 13 20:17:26.495452 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:17:26.765361 sshd[5528]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:26.771705 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:17:26.772470 systemd[1]: sshd@18-64.23.201.9:22-147.75.109.163:57684.service: Deactivated successfully. Feb 13 20:17:26.776069 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:17:26.778348 systemd-logind[1451]: Removed session 19. 
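Most of this section is the same SSH lifecycle repeated for sessions 10 through 30: systemd starts a per-connection sshd@… service, pam_unix and systemd-logind open the session, a session-N.scope runs, and teardown ends with "Removed session N." The small Go helper below pairs those logind lines to summarise session durations when fed journal output on stdin; it is a reading aid written for this document, not a tool shipped with systemd or Flatcar, and it assumes the year 2025 for timestamp arithmetic since journal timestamps here carry no year.

// Pair "New session N" / "Removed session N" logind entries and print durations.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	// e.g. "Feb 13 20:16:46.981966 systemd-logind[1451]: New session 10 of user core."
	newRe     = regexp.MustCompile(`(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user`)
	removedRe = regexp.MustCompile(`(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

func parseTS(s string) (time.Time, error) {
	// The year is not present in the journal prefix; 2025 is assumed.
	return time.Parse("Jan 2 15:04:05.000000 2006", s+" 2025")
}

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some journal lines are very long
	for sc.Scan() {
		line := sc.Text()
		// FindAll handles lines that contain several run-together entries.
		for _, m := range newRe.FindAllStringSubmatch(line, -1) {
			if ts, err := parseTS(m[1]); err == nil {
				opened[m[2]] = ts
			}
		}
		for _, m := range removedRe.FindAllStringSubmatch(line, -1) {
			if start, ok := opened[m[2]]; ok {
				if end, err := parseTS(m[1]); err == nil {
					fmt.Printf("session %s lasted %s\n", m[2], end.Sub(start).Round(time.Second))
				}
			}
		}
	}
}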
Feb 13 20:17:31.794299 systemd[1]: Started sshd@19-64.23.201.9:22-147.75.109.163:59358.service - OpenSSH per-connection server daemon (147.75.109.163:59358). Feb 13 20:17:31.854639 sshd[5541]: Accepted publickey for core from 147.75.109.163 port 59358 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:31.857832 sshd[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:31.866669 systemd-logind[1451]: New session 20 of user core. Feb 13 20:17:31.871098 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:17:32.052992 sshd[5541]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:32.059575 systemd[1]: sshd@19-64.23.201.9:22-147.75.109.163:59358.service: Deactivated successfully. Feb 13 20:17:32.063394 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:17:32.066153 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:17:32.069611 systemd-logind[1451]: Removed session 20. Feb 13 20:17:35.332579 kubelet[2519]: E0213 20:17:35.330454 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:17:37.083376 systemd[1]: Started sshd@20-64.23.201.9:22-147.75.109.163:59372.service - OpenSSH per-connection server daemon (147.75.109.163:59372). Feb 13 20:17:37.152814 sshd[5555]: Accepted publickey for core from 147.75.109.163 port 59372 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:37.155856 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:37.163541 systemd-logind[1451]: New session 21 of user core. Feb 13 20:17:37.172012 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:17:37.381031 sshd[5555]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:37.392534 systemd[1]: sshd@20-64.23.201.9:22-147.75.109.163:59372.service: Deactivated successfully. Feb 13 20:17:37.396621 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:17:37.399825 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:17:37.410280 systemd[1]: Started sshd@21-64.23.201.9:22-147.75.109.163:59386.service - OpenSSH per-connection server daemon (147.75.109.163:59386). Feb 13 20:17:37.413480 systemd-logind[1451]: Removed session 21. Feb 13 20:17:37.455966 sshd[5568]: Accepted publickey for core from 147.75.109.163 port 59386 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:37.459209 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:37.467814 systemd-logind[1451]: New session 22 of user core. Feb 13 20:17:37.473072 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:17:38.081523 sshd[5568]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:38.115289 systemd[1]: Started sshd@22-64.23.201.9:22-147.75.109.163:59394.service - OpenSSH per-connection server daemon (147.75.109.163:59394). Feb 13 20:17:38.116158 systemd[1]: sshd@21-64.23.201.9:22-147.75.109.163:59386.service: Deactivated successfully. Feb 13 20:17:38.121952 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:17:38.124477 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:17:38.131688 systemd-logind[1451]: Removed session 22. 
Feb 13 20:17:38.208145 sshd[5577]: Accepted publickey for core from 147.75.109.163 port 59394 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:38.211666 sshd[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:38.227220 systemd-logind[1451]: New session 23 of user core. Feb 13 20:17:38.235155 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:17:41.321590 systemd[1]: run-containerd-runc-k8s.io-7bc85994d715af3ab973090023616c75a1f5f240b45be2c4f0b42e20a1743ab0-runc.AgOBvj.mount: Deactivated successfully. Feb 13 20:17:41.586221 sshd[5577]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:41.601284 systemd[1]: Started sshd@23-64.23.201.9:22-147.75.109.163:38042.service - OpenSSH per-connection server daemon (147.75.109.163:38042). Feb 13 20:17:41.604575 systemd[1]: sshd@22-64.23.201.9:22-147.75.109.163:59394.service: Deactivated successfully. Feb 13 20:17:41.614003 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:17:41.615256 systemd[1]: session-23.scope: Consumed 1.068s CPU time. Feb 13 20:17:41.622145 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:17:41.629491 systemd-logind[1451]: Removed session 23. Feb 13 20:17:41.708413 sshd[5638]: Accepted publickey for core from 147.75.109.163 port 38042 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:41.713365 sshd[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:41.726344 systemd-logind[1451]: New session 24 of user core. Feb 13 20:17:41.740044 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:17:42.754114 sshd[5638]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:42.768517 systemd[1]: sshd@23-64.23.201.9:22-147.75.109.163:38042.service: Deactivated successfully. Feb 13 20:17:42.774502 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:17:42.776423 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:17:42.782140 systemd-logind[1451]: Removed session 24. Feb 13 20:17:42.790366 systemd[1]: Started sshd@24-64.23.201.9:22-147.75.109.163:38052.service - OpenSSH per-connection server daemon (147.75.109.163:38052). Feb 13 20:17:42.849681 sshd[5656]: Accepted publickey for core from 147.75.109.163 port 38052 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:42.852360 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:42.861286 systemd-logind[1451]: New session 25 of user core. Feb 13 20:17:42.870048 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:17:43.018092 sshd[5656]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:43.024353 systemd[1]: sshd@24-64.23.201.9:22-147.75.109.163:38052.service: Deactivated successfully. Feb 13 20:17:43.028489 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:17:43.029984 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:17:43.031728 systemd-logind[1451]: Removed session 25. 
Feb 13 20:17:45.359607 kubelet[2519]: E0213 20:17:45.359419 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:17:48.036091 systemd[1]: Started sshd@25-64.23.201.9:22-147.75.109.163:38066.service - OpenSSH per-connection server daemon (147.75.109.163:38066). Feb 13 20:17:48.118943 sshd[5698]: Accepted publickey for core from 147.75.109.163 port 38066 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:48.122597 sshd[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:48.131121 systemd-logind[1451]: New session 26 of user core. Feb 13 20:17:48.136018 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:17:48.315208 sshd[5698]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:48.323649 systemd[1]: sshd@25-64.23.201.9:22-147.75.109.163:38066.service: Deactivated successfully. Feb 13 20:17:48.328872 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:17:48.334863 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:17:48.336720 systemd-logind[1451]: Removed session 26. Feb 13 20:17:49.332454 kubelet[2519]: E0213 20:17:49.332269 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:17:51.335489 kubelet[2519]: E0213 20:17:51.335428 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:17:53.345294 systemd[1]: Started sshd@26-64.23.201.9:22-147.75.109.163:53086.service - OpenSSH per-connection server daemon (147.75.109.163:53086). Feb 13 20:17:53.442554 sshd[5733]: Accepted publickey for core from 147.75.109.163 port 53086 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:53.446259 sshd[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:53.454570 systemd-logind[1451]: New session 27 of user core. Feb 13 20:17:53.461095 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:17:53.656799 sshd[5733]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:53.662919 systemd[1]: sshd@26-64.23.201.9:22-147.75.109.163:53086.service: Deactivated successfully. Feb 13 20:17:53.667721 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:17:53.669865 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:17:53.672259 systemd-logind[1451]: Removed session 27. Feb 13 20:17:58.682237 systemd[1]: Started sshd@27-64.23.201.9:22-147.75.109.163:53092.service - OpenSSH per-connection server daemon (147.75.109.163:53092). Feb 13 20:17:58.759721 sshd[5758]: Accepted publickey for core from 147.75.109.163 port 53092 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:17:58.763397 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:58.772662 systemd-logind[1451]: New session 28 of user core. Feb 13 20:17:58.782135 systemd[1]: Started session-28.scope - Session 28 of User core. 
Feb 13 20:17:58.973582 sshd[5758]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:58.980502 systemd[1]: sshd@27-64.23.201.9:22-147.75.109.163:53092.service: Deactivated successfully. Feb 13 20:17:58.985041 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:17:58.986581 systemd-logind[1451]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:17:58.988941 systemd-logind[1451]: Removed session 28. Feb 13 20:18:04.003649 systemd[1]: Started sshd@28-64.23.201.9:22-147.75.109.163:43088.service - OpenSSH per-connection server daemon (147.75.109.163:43088). Feb 13 20:18:04.073638 sshd[5775]: Accepted publickey for core from 147.75.109.163 port 43088 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:04.077219 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:04.090963 systemd-logind[1451]: New session 29 of user core. Feb 13 20:18:04.098197 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:18:04.291641 sshd[5775]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:04.297477 systemd[1]: sshd@28-64.23.201.9:22-147.75.109.163:43088.service: Deactivated successfully. Feb 13 20:18:04.302217 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:18:04.307605 systemd-logind[1451]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:18:04.309621 systemd-logind[1451]: Removed session 29. Feb 13 20:18:09.313292 systemd[1]: Started sshd@29-64.23.201.9:22-147.75.109.163:45660.service - OpenSSH per-connection server daemon (147.75.109.163:45660). Feb 13 20:18:09.377860 sshd[5788]: Accepted publickey for core from 147.75.109.163 port 45660 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4 Feb 13 20:18:09.380583 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:09.390355 systemd-logind[1451]: New session 30 of user core. Feb 13 20:18:09.396095 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:18:09.580482 sshd[5788]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:09.586324 systemd-logind[1451]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:18:09.587117 systemd[1]: sshd@29-64.23.201.9:22-147.75.109.163:45660.service: Deactivated successfully. Feb 13 20:18:09.590449 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 20:18:09.595495 systemd-logind[1451]: Removed session 30. Feb 13 20:18:11.332780 kubelet[2519]: E0213 20:18:11.332233 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"