Dec 13 08:47:12.083841 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 08:47:12.083887 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:47:12.083910 kernel: BIOS-provided physical RAM map:
Dec 13 08:47:12.083926 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 08:47:12.083940 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 08:47:12.083955 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 08:47:12.083974 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Dec 13 08:47:12.083991 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Dec 13 08:47:12.084006 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 08:47:12.084026 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 08:47:12.084043 kernel: NX (Execute Disable) protection: active
Dec 13 08:47:12.084059 kernel: APIC: Static calls initialized
Dec 13 08:47:12.084075 kernel: SMBIOS 2.8 present.
Dec 13 08:47:12.084092 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 13 08:47:12.084113 kernel: Hypervisor detected: KVM
Dec 13 08:47:12.084135 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 08:47:12.084153 kernel: kvm-clock: using sched offset of 3681893284 cycles
Dec 13 08:47:12.084172 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 08:47:12.084191 kernel: tsc: Detected 2294.608 MHz processor
Dec 13 08:47:12.084209 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 08:47:12.084228 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 08:47:12.084246 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Dec 13 08:47:12.084264 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 08:47:12.084282 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 08:47:12.084304 kernel: ACPI: Early table checksum verification disabled
Dec 13 08:47:12.084323 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Dec 13 08:47:12.084341 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:12.084359 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:12.084377 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:12.084395 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 08:47:12.084413 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:12.084431 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:12.085337 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:12.085367 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:12.085385 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Dec 13 08:47:12.085404 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Dec 13 08:47:12.085422 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 08:47:12.085493 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Dec 13 08:47:12.085514 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Dec 13 08:47:12.085533 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Dec 13 08:47:12.085568 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Dec 13 08:47:12.085592 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 08:47:12.085605 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 08:47:12.085620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 08:47:12.085635 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 08:47:12.085651 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Dec 13 08:47:12.085666 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Dec 13 08:47:12.085695 kernel: Zone ranges:
Dec 13 08:47:12.085716 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 08:47:12.085735 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Dec 13 08:47:12.085755 kernel: Normal empty
Dec 13 08:47:12.085775 kernel: Movable zone start for each node
Dec 13 08:47:12.085795 kernel: Early memory node ranges
Dec 13 08:47:12.085815 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 08:47:12.085835 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Dec 13 08:47:12.085855 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Dec 13 08:47:12.085879 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 08:47:12.085899 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 08:47:12.085919 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Dec 13 08:47:12.085939 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 08:47:12.085959 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 08:47:12.085979 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 08:47:12.085998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 08:47:12.086018 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 08:47:12.086038 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 08:47:12.086062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 08:47:12.086082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 08:47:12.086102 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 08:47:12.086122 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 08:47:12.086142 kernel: TSC deadline timer available
Dec 13 08:47:12.086161 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 08:47:12.086181 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 08:47:12.086201 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 08:47:12.086222 kernel: Booting paravirtualized kernel on KVM
Dec 13 08:47:12.086246 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 08:47:12.086266 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 08:47:12.086286 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 08:47:12.086306 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 08:47:12.086325 kernel: pcpu-alloc: [0] 0 1
Dec 13 08:47:12.086345 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 08:47:12.086366 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:47:12.086387 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 08:47:12.086411 kernel: random: crng init done
Dec 13 08:47:12.086431 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 08:47:12.086471 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 08:47:12.086490 kernel: Fallback order for Node 0: 0
Dec 13 08:47:12.086510 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Dec 13 08:47:12.086530 kernel: Policy zone: DMA32
Dec 13 08:47:12.086550 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 08:47:12.086570 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 08:47:12.086590 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 08:47:12.086627 kernel: Kernel/User page tables isolation: enabled
Dec 13 08:47:12.086641 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 08:47:12.086655 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 08:47:12.086667 kernel: Dynamic Preempt: voluntary
Dec 13 08:47:12.086680 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 08:47:12.086696 kernel: rcu: RCU event tracing is enabled.
Dec 13 08:47:12.086719 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 08:47:12.086739 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 08:47:12.086759 kernel: Rude variant of Tasks RCU enabled.
Dec 13 08:47:12.086792 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 08:47:12.086816 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 08:47:12.086836 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 08:47:12.086856 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 08:47:12.086876 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 08:47:12.086896 kernel: Console: colour VGA+ 80x25
Dec 13 08:47:12.086915 kernel: printk: console [tty0] enabled
Dec 13 08:47:12.086935 kernel: printk: console [ttyS0] enabled
Dec 13 08:47:12.086954 kernel: ACPI: Core revision 20230628
Dec 13 08:47:12.086974 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 08:47:12.086999 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 08:47:12.087019 kernel: x2apic enabled
Dec 13 08:47:12.087039 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 08:47:12.087063 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 08:47:12.087077 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Dec 13 08:47:12.087099 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Dec 13 08:47:12.087119 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 08:47:12.087140 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 08:47:12.087180 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 08:47:12.087209 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 08:47:12.087232 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 08:47:12.087257 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 08:47:12.087279 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 08:47:12.087300 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 08:47:12.087321 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 08:47:12.087342 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 08:47:12.087364 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 08:47:12.087391 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 08:47:12.087412 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 08:47:12.087433 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 08:47:12.087576 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 08:47:12.087599 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 08:47:12.087620 kernel: Freeing SMP alternatives memory: 32K
Dec 13 08:47:12.087641 kernel: pid_max: default: 32768 minimum: 301
Dec 13 08:47:12.087662 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 08:47:12.087689 kernel: landlock: Up and running.
Dec 13 08:47:12.087710 kernel: SELinux: Initializing.
Dec 13 08:47:12.087732 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 08:47:12.087753 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 08:47:12.087774 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 13 08:47:12.087796 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:47:12.087817 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:47:12.087838 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:47:12.087864 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 13 08:47:12.087885 kernel: signal: max sigframe size: 1776
Dec 13 08:47:12.087906 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 08:47:12.087927 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 08:47:12.087949 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 08:47:12.087974 kernel: smp: Bringing up secondary CPUs ...
Dec 13 08:47:12.087989 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 08:47:12.088008 kernel: .... node #0, CPUs: #1
Dec 13 08:47:12.088029 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 08:47:12.088050 kernel: smpboot: Max logical packages: 1
Dec 13 08:47:12.088077 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Dec 13 08:47:12.088099 kernel: devtmpfs: initialized
Dec 13 08:47:12.088120 kernel: x86/mm: Memory block size: 128MB
Dec 13 08:47:12.088141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 08:47:12.088162 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 08:47:12.088184 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 08:47:12.088205 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 08:47:12.088226 kernel: audit: initializing netlink subsys (disabled)
Dec 13 08:47:12.088247 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 08:47:12.088272 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 08:47:12.088293 kernel: audit: type=2000 audit(1734079630.882:1): state=initialized audit_enabled=0 res=1
Dec 13 08:47:12.088314 kernel: cpuidle: using governor menu
Dec 13 08:47:12.088335 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 08:47:12.088366 kernel: dca service started, version 1.12.1
Dec 13 08:47:12.088390 kernel: PCI: Using configuration type 1 for base access
Dec 13 08:47:12.088412 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 08:47:12.088433 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 08:47:12.088494 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 08:47:12.088515 kernel: ACPI: Added _OSI(Module Device)
Dec 13 08:47:12.088555 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 08:47:12.088576 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 08:47:12.088597 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 08:47:12.088619 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 08:47:12.088640 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 08:47:12.088661 kernel: ACPI: Interpreter enabled
Dec 13 08:47:12.088682 kernel: ACPI: PM: (supports S0 S5)
Dec 13 08:47:12.088703 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 08:47:12.088729 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 08:47:12.088750 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 08:47:12.088771 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 08:47:12.088792 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 08:47:12.089079 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 08:47:12.089251 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 08:47:12.089401 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 08:47:12.089432 kernel: acpiphp: Slot [3] registered
Dec 13 08:47:12.089493 kernel: acpiphp: Slot [4] registered
Dec 13 08:47:12.089518 kernel: acpiphp: Slot [5] registered
Dec 13 08:47:12.089545 kernel: acpiphp: Slot [6] registered
Dec 13 08:47:12.089571 kernel: acpiphp: Slot [7] registered
Dec 13 08:47:12.089595 kernel: acpiphp: Slot [8] registered
Dec 13 08:47:12.089619 kernel: acpiphp: Slot [9] registered
Dec 13 08:47:12.089642 kernel: acpiphp: Slot [10] registered
Dec 13 08:47:12.089664 kernel: acpiphp: Slot [11] registered
Dec 13 08:47:12.089693 kernel: acpiphp: Slot [12] registered
Dec 13 08:47:12.089714 kernel: acpiphp: Slot [13] registered
Dec 13 08:47:12.089736 kernel: acpiphp: Slot [14] registered
Dec 13 08:47:12.089762 kernel: acpiphp: Slot [15] registered
Dec 13 08:47:12.089776 kernel: acpiphp: Slot [16] registered
Dec 13 08:47:12.089789 kernel: acpiphp: Slot [17] registered
Dec 13 08:47:12.089802 kernel: acpiphp: Slot [18] registered
Dec 13 08:47:12.089815 kernel: acpiphp: Slot [19] registered
Dec 13 08:47:12.089829 kernel: acpiphp: Slot [20] registered
Dec 13 08:47:12.089843 kernel: acpiphp: Slot [21] registered
Dec 13 08:47:12.089867 kernel: acpiphp: Slot [22] registered
Dec 13 08:47:12.089881 kernel: acpiphp: Slot [23] registered
Dec 13 08:47:12.089894 kernel: acpiphp: Slot [24] registered
Dec 13 08:47:12.089910 kernel: acpiphp: Slot [25] registered
Dec 13 08:47:12.089931 kernel: acpiphp: Slot [26] registered
Dec 13 08:47:12.089952 kernel: acpiphp: Slot [27] registered
Dec 13 08:47:12.089972 kernel: acpiphp: Slot [28] registered
Dec 13 08:47:12.089988 kernel: acpiphp: Slot [29] registered
Dec 13 08:47:12.090010 kernel: acpiphp: Slot [30] registered
Dec 13 08:47:12.090036 kernel: acpiphp: Slot [31] registered
Dec 13 08:47:12.090057 kernel: PCI host bridge to bus 0000:00
Dec 13 08:47:12.090289 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 08:47:12.090424 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 08:47:12.090572 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 08:47:12.090720 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 08:47:12.090848 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 08:47:12.090982 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 08:47:12.091161 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 08:47:12.091349 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 08:47:12.091575 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 08:47:12.091793 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Dec 13 08:47:12.091953 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 08:47:12.092122 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 08:47:12.092228 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 08:47:12.092329 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 08:47:12.092471 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 13 08:47:12.092578 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Dec 13 08:47:12.092689 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 08:47:12.092790 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 08:47:12.092896 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 08:47:12.093012 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 08:47:12.093124 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 08:47:12.093226 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 08:47:12.093326 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Dec 13 08:47:12.093427 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 08:47:12.094655 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 08:47:12.094818 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 08:47:12.094925 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Dec 13 08:47:12.095025 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Dec 13 08:47:12.095125 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 08:47:12.095238 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 08:47:12.095339 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Dec 13 08:47:12.095459 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Dec 13 08:47:12.095562 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 08:47:12.095672 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Dec 13 08:47:12.095828 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Dec 13 08:47:12.095986 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Dec 13 08:47:12.098025 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 08:47:12.098154 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Dec 13 08:47:12.098267 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 08:47:12.098369 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Dec 13 08:47:12.098486 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 08:47:12.099735 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Dec 13 08:47:12.099853 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Dec 13 08:47:12.100121 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Dec 13 08:47:12.100266 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 13 08:47:12.100384 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 08:47:12.102600 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Dec 13 08:47:12.102730 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 13 08:47:12.102744 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 08:47:12.102755 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 08:47:12.102765 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 08:47:12.102774 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 08:47:12.102794 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 08:47:12.102804 kernel: iommu: Default domain type: Translated
Dec 13 08:47:12.102814 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 08:47:12.102824 kernel: PCI: Using ACPI for IRQ routing
Dec 13 08:47:12.102834 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 08:47:12.102844 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 08:47:12.102854 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Dec 13 08:47:12.102964 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 08:47:12.103136 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 08:47:12.103261 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 08:47:12.103274 kernel: vgaarb: loaded
Dec 13 08:47:12.103284 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 08:47:12.103294 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 08:47:12.103304 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 08:47:12.103314 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 08:47:12.103324 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 08:47:12.103334 kernel: pnp: PnP ACPI init
Dec 13 08:47:12.103344 kernel: pnp: PnP ACPI: found 4 devices
Dec 13 08:47:12.103381 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 08:47:12.103396 kernel: NET: Registered PF_INET protocol family
Dec 13 08:47:12.103411 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 08:47:12.103427 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 08:47:12.103456 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 08:47:12.103467 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 08:47:12.103477 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 08:47:12.103487 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 08:47:12.103497 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 08:47:12.103511 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 08:47:12.103521 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 08:47:12.103531 kernel: NET: Registered PF_XDP protocol family
Dec 13 08:47:12.103658 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 08:47:12.103758 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 08:47:12.103849 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 08:47:12.103943 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 08:47:12.104035 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 08:47:12.104153 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 08:47:12.104260 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 08:47:12.104276 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 08:47:12.104384 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 30858 usecs
Dec 13 08:47:12.104398 kernel: PCI: CLS 0 bytes, default 64
Dec 13 08:47:12.104409 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 08:47:12.104419 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Dec 13 08:47:12.104429 kernel: Initialise system trusted keyrings
Dec 13 08:47:12.104474 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 08:47:12.104484 kernel: Key type asymmetric registered
Dec 13 08:47:12.104494 kernel: Asymmetric key parser 'x509' registered
Dec 13 08:47:12.104504 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 08:47:12.104513 kernel: io scheduler mq-deadline registered
Dec 13 08:47:12.104524 kernel: io scheduler kyber registered
Dec 13 08:47:12.104533 kernel: io scheduler bfq registered
Dec 13 08:47:12.104544 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 08:47:12.104554 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 08:47:12.104568 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 08:47:12.104578 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 08:47:12.104587 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 08:47:12.104597 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 08:47:12.104607 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 08:47:12.104617 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 08:47:12.104626 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 08:47:12.104637 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 08:47:12.104774 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 08:47:12.104876 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 08:47:12.104971 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T08:47:11 UTC (1734079631)
Dec 13 08:47:12.105062 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 08:47:12.105074 kernel: intel_pstate: CPU model not supported
Dec 13 08:47:12.105084 kernel: NET: Registered PF_INET6 protocol family
Dec 13 08:47:12.105094 kernel: Segment Routing with IPv6
Dec 13 08:47:12.105104 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 08:47:12.105113 kernel: NET: Registered PF_PACKET protocol family
Dec 13 08:47:12.105127 kernel: Key type dns_resolver registered
Dec 13 08:47:12.105137 kernel: IPI shorthand broadcast: enabled
Dec 13 08:47:12.105146 kernel: sched_clock: Marking stable (1175003623, 193231615)->(1425917832, -57682594)
Dec 13 08:47:12.105156 kernel: registered taskstats version 1
Dec 13 08:47:12.105166 kernel: Loading compiled-in X.509 certificates
Dec 13 08:47:12.105176 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 08:47:12.105185 kernel: Key type .fscrypt registered
Dec 13 08:47:12.105194 kernel: Key type fscrypt-provisioning registered
Dec 13 08:47:12.105204 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 08:47:12.105217 kernel: ima: Allocated hash algorithm: sha1
Dec 13 08:47:12.105227 kernel: ima: No architecture policies found
Dec 13 08:47:12.105236 kernel: clk: Disabling unused clocks
Dec 13 08:47:12.105245 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 08:47:12.105255 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 08:47:12.105293 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 08:47:12.105307 kernel: Run /init as init process
Dec 13 08:47:12.105317 kernel: with arguments:
Dec 13 08:47:12.105327 kernel: /init
Dec 13 08:47:12.105341 kernel: with environment:
Dec 13 08:47:12.105351 kernel: HOME=/
Dec 13 08:47:12.105361 kernel: TERM=linux
Dec 13 08:47:12.105370 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 08:47:12.105383 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 08:47:12.105396 systemd[1]: Detected virtualization kvm.
Dec 13 08:47:12.105408 systemd[1]: Detected architecture x86-64.
Dec 13 08:47:12.105422 systemd[1]: Running in initrd.
Dec 13 08:47:12.105432 systemd[1]: No hostname configured, using default hostname.
Dec 13 08:47:12.107560 systemd[1]: Hostname set to <localhost>.
Dec 13 08:47:12.107582 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 08:47:12.107594 systemd[1]: Queued start job for default target initrd.target.
Dec 13 08:47:12.107605 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:47:12.107616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:47:12.107628 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 08:47:12.107647 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 08:47:12.107658 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 08:47:12.107668 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 08:47:12.107681 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 08:47:12.107692 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 08:47:12.107703 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:47:12.107714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:47:12.107729 systemd[1]: Reached target paths.target - Path Units.
Dec 13 08:47:12.107740 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 08:47:12.107750 systemd[1]: Reached target swap.target - Swaps.
Dec 13 08:47:12.107765 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 08:47:12.107775 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 08:47:12.107786 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 08:47:12.107801 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 08:47:12.107812 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 08:47:12.107823 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:47:12.107834 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:47:12.107845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:47:12.107855 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 08:47:12.107866 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 08:47:12.107878 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 08:47:12.107892 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 08:47:12.107902 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 08:47:12.107913 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 08:47:12.107924 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 08:47:12.107934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:12.107985 systemd-journald[184]: Collecting audit messages is disabled.
Dec 13 08:47:12.108019 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 08:47:12.108030 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:47:12.108041 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 08:47:12.108054 systemd-journald[184]: Journal started
Dec 13 08:47:12.108082 systemd-journald[184]: Runtime Journal (/run/log/journal/566a16a1cfcc4f43b845ed545a817b5a) is 4.9M, max 39.3M, 34.4M free.
Dec 13 08:47:12.097178 systemd-modules-load[185]: Inserted module 'overlay'
Dec 13 08:47:12.123067 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 08:47:12.135494 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 08:47:12.137767 systemd-modules-load[185]: Inserted module 'br_netfilter'
Dec 13 08:47:12.180471 kernel: Bridge firewalling registered
Dec 13 08:47:12.184006 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 08:47:12.189275 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:47:12.190529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:12.201695 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 08:47:12.209931 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:47:12.219969 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 08:47:12.227553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 08:47:12.242384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 08:47:12.253531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:47:12.263063 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:47:12.266697 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:47:12.276825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 08:47:12.281130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:47:12.295682 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 08:47:12.305266 dracut-cmdline[216]: dracut-dracut-053
Dec 13 08:47:12.310196 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:47:12.354008 systemd-resolved[218]: Positive Trust Anchors:
Dec 13 08:47:12.354033 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 08:47:12.354126 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 08:47:12.357356 systemd-resolved[218]: Defaulting to hostname 'linux'.
Dec 13 08:47:12.358968 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 08:47:12.365534 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 08:47:12.447503 kernel: SCSI subsystem initialized
Dec 13 08:47:12.459523 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 08:47:12.473568 kernel: iscsi: registered transport (tcp)
Dec 13 08:47:12.499490 kernel: iscsi: registered transport (qla4xxx)
Dec 13 08:47:12.499597 kernel: QLogic iSCSI HBA Driver
Dec 13 08:47:12.559417 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 08:47:12.573828 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 08:47:12.606598 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 08:47:12.606734 kernel: device-mapper: uevent: version 1.0.3
Dec 13 08:47:12.608024 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 08:47:12.658534 kernel: raid6: avx2x4 gen() 17578 MB/s
Dec 13 08:47:12.676529 kernel: raid6: avx2x2 gen() 16839 MB/s
Dec 13 08:47:12.694962 kernel: raid6: avx2x1 gen() 12991 MB/s
Dec 13 08:47:12.695084 kernel: raid6: using algorithm avx2x4 gen() 17578 MB/s
Dec 13 08:47:12.713521 kernel: raid6: .... xor() 6847 MB/s, rmw enabled
Dec 13 08:47:12.713612 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 08:47:12.741675 kernel: xor: automatically using best checksumming function avx
Dec 13 08:47:12.922505 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 08:47:12.938793 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 08:47:12.945803 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:47:12.979926 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Dec 13 08:47:12.989074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:47:12.999698 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 08:47:13.024287 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Dec 13 08:47:13.069865 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 08:47:13.077839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 08:47:13.150837 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:47:13.161691 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 08:47:13.198530 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 08:47:13.200546 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 08:47:13.203302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 08:47:13.206801 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 08:47:13.214468 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 08:47:13.245818 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 08:47:13.276590 kernel: libata version 3.00 loaded.
Dec 13 08:47:13.276682 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Dec 13 08:47:13.331865 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 08:47:13.332115 kernel: scsi host0: ata_piix
Dec 13 08:47:13.332364 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 08:47:13.332389 kernel: scsi host1: Virtio SCSI HBA
Dec 13 08:47:13.332611 kernel: scsi host2: ata_piix
Dec 13 08:47:13.332807 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Dec 13 08:47:13.332831 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Dec 13 08:47:13.332852 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 08:47:13.333037 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 08:47:13.333061 kernel: GPT:9289727 != 125829119
Dec 13 08:47:13.333080 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 08:47:13.333101 kernel: GPT:9289727 != 125829119
Dec 13 08:47:13.333121 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 08:47:13.333148 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:47:13.327077 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 08:47:13.327221 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:47:13.347571 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Dec 13 08:47:13.347859 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Dec 13 08:47:13.332400 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:47:13.333185 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:47:13.356063 kernel: ACPI: bus type USB registered
Dec 13 08:47:13.356102 kernel: usbcore: registered new interface driver usbfs
Dec 13 08:47:13.356142 kernel: usbcore: registered new interface driver hub
Dec 13 08:47:13.356168 kernel: usbcore: registered new device driver usb
Dec 13 08:47:13.333533 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:13.339597 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:13.356468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:13.449705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:13.460793 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:47:13.471168 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 08:47:13.471204 kernel: AES CTR mode by8 optimization enabled
Dec 13 08:47:13.535790 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:47:13.552490 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458)
Dec 13 08:47:13.563593 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (451)
Dec 13 08:47:13.569188 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 08:47:13.583400 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 08:47:13.590738 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 08:47:13.601277 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 13 08:47:13.601585 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 13 08:47:13.601717 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 13 08:47:13.601843 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Dec 13 08:47:13.601965 kernel: hub 1-0:1.0: USB hub found
Dec 13 08:47:13.602117 kernel: hub 1-0:1.0: 2 ports detected
Dec 13 08:47:13.607348 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 08:47:13.608966 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 08:47:13.615717 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 08:47:13.627116 disk-uuid[550]: Primary Header is updated.
Dec 13 08:47:13.627116 disk-uuid[550]: Secondary Entries is updated.
Dec 13 08:47:13.627116 disk-uuid[550]: Secondary Header is updated.
Dec 13 08:47:13.635487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:47:13.645482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:47:13.658535 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:47:14.655480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:47:14.656018 disk-uuid[551]: The operation has completed successfully.
Dec 13 08:47:14.707354 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 08:47:14.707493 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 08:47:14.728802 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 08:47:14.733199 sh[564]: Success
Dec 13 08:47:14.752473 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 08:47:14.837935 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 08:47:14.840718 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 08:47:14.841723 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 08:47:14.868602 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 08:47:14.868700 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:47:14.871088 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 08:47:14.873580 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 08:47:14.876250 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 08:47:14.888713 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 08:47:14.890495 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 08:47:14.902740 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 08:47:14.907413 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 08:47:14.923685 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:14.923750 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:47:14.923778 kernel: BTRFS info (device vda6): using free space tree
Dec 13 08:47:14.931489 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 08:47:14.947315 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 08:47:14.951489 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:14.962856 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 08:47:14.969893 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 08:47:15.104744 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 08:47:15.111858 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 08:47:15.137339 ignition[656]: Ignition 2.19.0
Dec 13 08:47:15.137497 ignition[656]: Stage: fetch-offline
Dec 13 08:47:15.137555 ignition[656]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:15.137567 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:15.143992 ignition[656]: parsed url from cmdline: ""
Dec 13 08:47:15.144009 ignition[656]: no config URL provided
Dec 13 08:47:15.144024 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 08:47:15.144051 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Dec 13 08:47:15.144065 ignition[656]: failed to fetch config: resource requires networking
Dec 13 08:47:15.144512 ignition[656]: Ignition finished successfully
Dec 13 08:47:15.148595 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 08:47:15.164483 systemd-networkd[752]: lo: Link UP
Dec 13 08:47:15.164494 systemd-networkd[752]: lo: Gained carrier
Dec 13 08:47:15.168535 systemd-networkd[752]: Enumeration completed
Dec 13 08:47:15.168750 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 08:47:15.168973 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 08:47:15.168978 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Dec 13 08:47:15.169925 systemd[1]: Reached target network.target - Network.
Dec 13 08:47:15.171802 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 08:47:15.171807 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 08:47:15.173906 systemd-networkd[752]: eth0: Link UP
Dec 13 08:47:15.173911 systemd-networkd[752]: eth0: Gained carrier
Dec 13 08:47:15.173924 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 08:47:15.179460 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 08:47:15.179866 systemd-networkd[752]: eth1: Link UP
Dec 13 08:47:15.179870 systemd-networkd[752]: eth1: Gained carrier
Dec 13 08:47:15.179888 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 08:47:15.193604 systemd-networkd[752]: eth0: DHCPv4 address 146.190.59.17/19, gateway 146.190.32.1 acquired from 169.254.169.253
Dec 13 08:47:15.202416 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.3/20, gateway 10.124.0.1 acquired from 169.254.169.253
Dec 13 08:47:15.209044 ignition[756]: Ignition 2.19.0
Dec 13 08:47:15.209066 ignition[756]: Stage: fetch
Dec 13 08:47:15.209591 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:15.209609 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:15.209755 ignition[756]: parsed url from cmdline: ""
Dec 13 08:47:15.209765 ignition[756]: no config URL provided
Dec 13 08:47:15.209777 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 08:47:15.209797 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Dec 13 08:47:15.209827 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Dec 13 08:47:15.226276 ignition[756]: GET result: OK
Dec 13 08:47:15.226550 ignition[756]: parsing config with SHA512: 0873ff49b07807bde9c3626ce3645228d516c8596f69bbcd58adbcd777da60cc18608932c822e5c739903fc674d959536f89cf5431a65e4a7a735c52208d71d8
Dec 13 08:47:15.234624 unknown[756]: fetched base config from "system"
Dec 13 08:47:15.234639 unknown[756]: fetched base config from "system"
Dec 13 08:47:15.235394 ignition[756]: fetch: fetch complete
Dec 13 08:47:15.234663 unknown[756]: fetched user config from "digitalocean"
Dec 13 08:47:15.235401 ignition[756]: fetch: fetch passed
Dec 13 08:47:15.237674 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 08:47:15.235472 ignition[756]: Ignition finished successfully
Dec 13 08:47:15.244793 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 08:47:15.281936 ignition[763]: Ignition 2.19.0
Dec 13 08:47:15.281947 ignition[763]: Stage: kargs
Dec 13 08:47:15.282251 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:15.282268 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:15.283596 ignition[763]: kargs: kargs passed
Dec 13 08:47:15.286080 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 08:47:15.283678 ignition[763]: Ignition finished successfully
Dec 13 08:47:15.300844 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 08:47:15.327014 ignition[769]: Ignition 2.19.0
Dec 13 08:47:15.327030 ignition[769]: Stage: disks
Dec 13 08:47:15.327270 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:15.327283 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:15.329078 ignition[769]: disks: disks passed
Dec 13 08:47:15.330659 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 08:47:15.329140 ignition[769]: Ignition finished successfully
Dec 13 08:47:15.336740 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 08:47:15.338183 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 08:47:15.339487 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 08:47:15.340943 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 08:47:15.342685 systemd[1]: Reached target basic.target - Basic System.
Dec 13 08:47:15.355742 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 08:47:15.381687 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 08:47:15.390675 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 08:47:15.397694 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 08:47:15.521477 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 08:47:15.522951 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 08:47:15.524379 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 08:47:15.530680 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 08:47:15.546909 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 08:47:15.552794 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Dec 13 08:47:15.557788 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 08:47:15.558817 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 08:47:15.558878 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 08:47:15.567887 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785)
Dec 13 08:47:15.566997 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 08:47:15.577792 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:15.577830 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:47:15.577845 kernel: BTRFS info (device vda6): using free space tree
Dec 13 08:47:15.581492 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 08:47:15.585967 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 08:47:15.596909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 08:47:15.696016 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 08:47:15.697176 coreos-metadata[787]: Dec 13 08:47:15.696 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 08:47:15.700082 coreos-metadata[788]: Dec 13 08:47:15.700 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 08:47:15.707092 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Dec 13 08:47:15.713321 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 08:47:15.715499 coreos-metadata[787]: Dec 13 08:47:15.713 INFO Fetch successful
Dec 13 08:47:15.716197 coreos-metadata[788]: Dec 13 08:47:15.714 INFO Fetch successful
Dec 13 08:47:15.723207 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Dec 13 08:47:15.727185 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 08:47:15.723329 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Dec 13 08:47:15.730099 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 08:47:15.732487 coreos-metadata[788]: Dec 13 08:47:15.728 INFO wrote hostname ci-4081.2.1-f-1ee231485e to /sysroot/etc/hostname
Dec 13 08:47:15.848280 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 08:47:15.860699 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 08:47:15.865726 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 08:47:15.875183 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 08:47:15.879840 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:15.905099 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 08:47:15.913845 ignition[906]: INFO : Ignition 2.19.0
Dec 13 08:47:15.913845 ignition[906]: INFO : Stage: mount
Dec 13 08:47:15.915565 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:15.915565 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:15.917554 ignition[906]: INFO : mount: mount passed
Dec 13 08:47:15.917554 ignition[906]: INFO : Ignition finished successfully
Dec 13 08:47:15.919295 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 08:47:15.926664 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 08:47:15.955828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 08:47:15.972480 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919)
Dec 13 08:47:15.977508 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:15.977609 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:47:15.977637 kernel: BTRFS info (device vda6): using free space tree
Dec 13 08:47:15.984507 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 08:47:15.986345 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
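The two coreos-metadata instances above fetch the droplet's full metadata document, and one derives the hostname it writes to /sysroot/etc/hostname. A hedged equivalent by hand, using the URL taken from the log; the .hostname field name is an assumption about DigitalOcean's metadata schema, and jq is only available once the full system is up:

    # Re-fetch the metadata document the agent used and pull out the hostname.
    curl -s http://169.254.169.254/metadata/v1.json | jq -r '.hostname'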
Dec 13 08:47:16.016953 ignition[936]: INFO : Ignition 2.19.0
Dec 13 08:47:16.016953 ignition[936]: INFO : Stage: files
Dec 13 08:47:16.019110 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:16.019110 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:16.019110 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 08:47:16.022617 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 08:47:16.022617 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 08:47:16.026128 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 08:47:16.027370 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 08:47:16.027370 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 08:47:16.026670 unknown[936]: wrote ssh authorized keys file for user: core
Dec 13 08:47:16.031427 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 08:47:16.031427 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 08:47:16.031427 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 08:47:16.031427 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 08:47:16.069293 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 08:47:16.374329 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 08:47:16.374329 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 08:47:16.378313 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 08:47:16.587607 systemd-networkd[752]: eth0: Gained IPv6LL
Dec 13 08:47:16.843655 systemd-networkd[752]: eth1: Gained IPv6LL
Dec 13 08:47:16.936359 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 08:47:17.794677 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 08:47:17.794677 ignition[936]: INFO : files: op(c): [started] processing unit "containerd.service"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: op(c): [finished] processing unit "containerd.service"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 08:47:17.797471 ignition[936]: INFO : files: files passed
Dec 13 08:47:17.797471 ignition[936]: INFO : Ignition finished successfully
Dec 13 08:47:17.798353 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 08:47:17.807805 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 08:47:17.813692 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 08:47:17.815099 systemd[1]: ignition-quench.service: Deactivated successfully.
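Everything the files stage wrote above (the helm tarball, the containerd drop-in, the prepare-helm unit, the kubernetes sysext image and its symlink) is driven by the Ignition config fetched earlier. A hedged Butane-style sketch of the shape such a declaration could take; the drop-in and unit bodies below are placeholders, since the log records only the file names, never their contents:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: containerd.service
          dropins:
            - name: 10-use-cgroupfs.conf
              contents: |
                # drop-in body not recorded in the log; placeholder only
                [Service]
        - name: prepare-helm.service
          enabled: true
          contents: |
            # unit body not recorded in the log; placeholder only
            [Unit]
            Description=Unpack helm to /opt/bin
    storage:
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw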
Dec 13 08:47:17.815214 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 08:47:17.842204 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 08:47:17.842204 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 08:47:17.845654 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 08:47:17.848050 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 08:47:17.849808 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 08:47:17.865793 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 08:47:17.917578 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 08:47:17.917774 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 08:47:17.920668 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 08:47:17.921581 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 08:47:17.922974 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 08:47:17.931777 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 08:47:17.952286 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 08:47:17.958781 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 08:47:17.977277 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 08:47:17.978300 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 08:47:17.979717 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 08:47:17.981289 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 08:47:17.981524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 08:47:17.983280 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 08:47:17.984303 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 08:47:17.986114 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 08:47:17.987313 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 08:47:17.988616 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 08:47:17.990341 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 08:47:17.991755 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 08:47:17.993326 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 08:47:17.994569 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 08:47:17.995913 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 08:47:17.997275 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 08:47:17.997537 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 08:47:17.998871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:47:17.999675 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:47:18.000902 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 08:47:18.001139 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:47:18.002410 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 08:47:18.002704 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 08:47:18.004270 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 08:47:18.004397 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 08:47:18.005896 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 08:47:18.006001 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 08:47:18.007250 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 08:47:18.007356 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 08:47:18.017568 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 08:47:18.020853 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 08:47:18.021675 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 08:47:18.021996 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:47:18.025775 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 08:47:18.027550 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 08:47:18.038031 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 08:47:18.038147 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 08:47:18.055114 ignition[989]: INFO : Ignition 2.19.0
Dec 13 08:47:18.055114 ignition[989]: INFO : Stage: umount
Dec 13 08:47:18.055114 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:18.055114 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:18.065688 ignition[989]: INFO : umount: umount passed
Dec 13 08:47:18.065688 ignition[989]: INFO : Ignition finished successfully
Dec 13 08:47:18.057801 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 08:47:18.057910 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 08:47:18.059568 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 08:47:18.059696 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 08:47:18.061647 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 08:47:18.061711 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 08:47:18.062602 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 08:47:18.062649 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 08:47:18.063185 systemd[1]: Stopped target network.target - Network.
Dec 13 08:47:18.066593 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 08:47:18.066663 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 08:47:18.069858 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 08:47:18.070960 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 08:47:18.075545 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:47:18.076377 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 08:47:18.078052 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 08:47:18.079347 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 08:47:18.079408 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 08:47:18.080753 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 08:47:18.080797 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 08:47:18.082627 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 08:47:18.082692 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 08:47:18.083755 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 08:47:18.083804 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 08:47:18.085514 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 08:47:18.086565 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 08:47:18.088521 systemd-networkd[752]: eth1: DHCPv6 lease lost
Dec 13 08:47:18.089042 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 08:47:18.089683 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 08:47:18.089784 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 08:47:18.092174 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 08:47:18.092246 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 08:47:18.092626 systemd-networkd[752]: eth0: DHCPv6 lease lost
Dec 13 08:47:18.095435 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 08:47:18.097771 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 08:47:18.099078 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 08:47:18.099189 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 08:47:18.105288 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 08:47:18.105363 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:47:18.111624 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 08:47:18.112249 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 08:47:18.112335 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 08:47:18.113180 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 08:47:18.113247 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:47:18.114426 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 08:47:18.114492 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:47:18.115973 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 08:47:18.116037 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:47:18.118595 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:47:18.131276 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 08:47:18.132497 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 08:47:18.136343 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 08:47:18.136552 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:47:18.138267 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 08:47:18.138339 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:47:18.139461 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 08:47:18.139536 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:47:18.141706 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 08:47:18.141786 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 08:47:18.142645 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 08:47:18.142708 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 08:47:18.145101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 08:47:18.145179 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:47:18.152812 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 08:47:18.153733 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 08:47:18.153816 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:47:18.154490 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 08:47:18.154533 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 08:47:18.158853 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 08:47:18.158939 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:47:18.161059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:47:18.161133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:18.166733 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 08:47:18.166883 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 08:47:18.168216 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 08:47:18.174733 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 08:47:18.188314 systemd[1]: Switching root.
Dec 13 08:47:18.227883 systemd-journald[184]: Journal stopped
Dec 13 08:47:19.794735 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Dec 13 08:47:19.794856 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 08:47:19.794883 kernel: SELinux: policy capability open_perms=1
Dec 13 08:47:19.794921 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 08:47:19.794947 kernel: SELinux: policy capability always_check_network=0
Dec 13 08:47:19.794960 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 08:47:19.794978 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 08:47:19.794990 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 08:47:19.795009 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 08:47:19.795021 kernel: audit: type=1403 audit(1734079638.517:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 08:47:19.795036 systemd[1]: Successfully loaded SELinux policy in 47.736ms.
Dec 13 08:47:19.795067 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.246ms.
Dec 13 08:47:19.795111 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 08:47:19.795152 systemd[1]: Detected virtualization kvm.
Dec 13 08:47:19.795188 systemd[1]: Detected architecture x86-64.
Dec 13 08:47:19.795214 systemd[1]: Detected first boot.
Dec 13 08:47:19.795242 systemd[1]: Hostname set to <ci-4081.2.1-f-1ee231485e>.
Dec 13 08:47:19.795270 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 08:47:19.795303 zram_generator::config[1052]: No configuration found.
Dec 13 08:47:19.795337 systemd[1]: Populated /etc with preset unit settings.
Dec 13 08:47:19.795374 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 08:47:19.795408 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 08:47:19.795465 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 08:47:19.795525 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 08:47:19.795552 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 08:47:19.795587 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 08:47:19.795614 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 08:47:19.795642 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 08:47:19.795669 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 08:47:19.795696 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 08:47:19.795722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:47:19.795753 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:47:19.795779 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 08:47:19.795806 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 08:47:19.795832 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 08:47:19.795860 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 08:47:19.795888 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 08:47:19.795920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:47:19.795953 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 08:47:19.795986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 08:47:19.796005 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 08:47:19.796018 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 08:47:19.796046 systemd[1]: Reached target swap.target - Swaps.
Dec 13 08:47:19.796074 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 08:47:19.796111 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 08:47:19.796139 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 08:47:19.796170 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 08:47:19.796196 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:47:19.796223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:47:19.796249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:47:19.796275 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 08:47:19.796326 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 08:47:19.796354 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 08:47:19.796381 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 08:47:19.796409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:19.796465 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 08:47:19.796494 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 08:47:19.796521 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 08:47:19.796548 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 08:47:19.796576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:19.796605 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 08:47:19.796635 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 08:47:19.796663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:19.796690 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 08:47:19.796725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:47:19.796765 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 08:47:19.796792 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:19.796819 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 08:47:19.796868 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 08:47:19.796900 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 08:47:19.796928 kernel: fuse: init (API version 7.39)
Dec 13 08:47:19.796953 kernel: loop: module loaded
Dec 13 08:47:19.796987 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 08:47:19.797016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 08:47:19.797051 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 08:47:19.797086 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 08:47:19.797111 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 08:47:19.797130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:19.797153 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 08:47:19.797182 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 08:47:19.797209 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 08:47:19.797240 kernel: ACPI: bus type drm_connector registered
Dec 13 08:47:19.797334 systemd-journald[1139]: Collecting audit messages is disabled.
Dec 13 08:47:19.797371 systemd-journald[1139]: Journal started
Dec 13 08:47:19.797400 systemd-journald[1139]: Runtime Journal (/run/log/journal/566a16a1cfcc4f43b845ed545a817b5a) is 4.9M, max 39.3M, 34.4M free.
Dec 13 08:47:19.803513 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 08:47:19.805017 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 08:47:19.807849 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 08:47:19.808606 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 08:47:19.810401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:47:19.811625 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 08:47:19.811871 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 08:47:19.813158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:19.813345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:19.814268 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 08:47:19.814770 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 08:47:19.815658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:47:19.815830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:47:19.816938 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 08:47:19.817113 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 08:47:19.818031 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:19.820863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:19.824074 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:47:19.827319 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 08:47:19.829350 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 08:47:19.842072 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 08:47:19.847326 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 08:47:19.855637 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 08:47:19.859183 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 08:47:19.860505 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 08:47:19.869771 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 08:47:19.887715 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 08:47:19.889479 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 08:47:19.895699 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 08:47:19.896463 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 08:47:19.907523 systemd-journald[1139]: Time spent on flushing to /var/log/journal/566a16a1cfcc4f43b845ed545a817b5a is 45.142ms for 973 entries.
Dec 13 08:47:19.907523 systemd-journald[1139]: System Journal (/var/log/journal/566a16a1cfcc4f43b845ed545a817b5a) is 8.0M, max 195.6M, 187.6M free.
Dec 13 08:47:19.966562 systemd-journald[1139]: Received client request to flush runtime journal.
Dec 13 08:47:19.909690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 08:47:19.935735 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 08:47:19.943694 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 08:47:19.945268 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 08:47:19.969327 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 08:47:19.999163 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 08:47:20.005960 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 08:47:20.015686 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Dec 13 08:47:20.015732 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Dec 13 08:47:20.027543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 08:47:20.031893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:47:20.049757 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 08:47:20.077128 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:47:20.091801 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 08:47:20.121019 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 08:47:20.133047 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 08:47:20.142919 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 08:47:20.187644 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Dec 13 08:47:20.187668 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Dec 13 08:47:20.200101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:47:21.042019 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 08:47:21.050063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:47:21.082473 systemd-udevd[1220]: Using default interface naming scheme 'v255'.
Dec 13 08:47:21.111903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:47:21.122752 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 08:47:21.156954 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 08:47:21.237347 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Dec 13 08:47:21.250500 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1237)
Dec 13 08:47:21.263491 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1237)
Dec 13 08:47:21.267706 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 08:47:21.269923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:21.271477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:21.281859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:21.291752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:47:21.305842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:21.309968 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 08:47:21.310052 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 08:47:21.310136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:21.322068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:47:21.322390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:47:21.326106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 08:47:21.339939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:21.340272 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:21.357996 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:21.358332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:21.362365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 08:47:21.370787 systemd-networkd[1224]: lo: Link UP
Dec 13 08:47:21.370799 systemd-networkd[1224]: lo: Gained carrier
Dec 13 08:47:21.374844 systemd-networkd[1224]: Enumeration completed
Dec 13 08:47:21.375239 systemd-networkd[1224]: eth1: Configuring with /run/systemd/network/10-f2:19:47:ea:98:5c.network.
Dec 13 08:47:21.375994 systemd-networkd[1224]: eth1: Link UP
Dec 13 08:47:21.376003 systemd-networkd[1224]: eth1: Gained carrier
Dec 13 08:47:21.378304 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 08:47:21.388479 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1228)
Dec 13 08:47:21.391680 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 08:47:21.417677 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 08:47:21.449137 systemd-networkd[1224]: eth0: Configuring with /run/systemd/network/10-da:e0:ed:36:1f:c9.network.
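Unlike the initrd, which matched the generic zz-default.network, the real root's networkd picks up generated per-interface units in /run/systemd/network named after each NIC's MAC address. The unit bodies are not shown in the log; a hedged sketch of what a MAC-matched DHCP unit of that name conventionally contains:

    # /run/systemd/network/10-f2:19:47:ea:98:5c.network
    # (file name taken from the log; the body below is an assumption)
    [Match]
    MACAddress=f2:19:47:ea:98:5c

    [Network]
    DHCP=yes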
Dec 13 08:47:21.450054 systemd-networkd[1224]: eth0: Link UP
Dec 13 08:47:21.450065 systemd-networkd[1224]: eth0: Gained carrier
Dec 13 08:47:21.508482 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 08:47:21.514509 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 08:47:21.523735 kernel: ACPI: button: Power Button [PWRF]
Dec 13 08:47:21.542469 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 08:47:21.581498 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 08:47:21.590480 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 13 08:47:21.590598 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 13 08:47:21.594776 kernel: Console: switching to colour dummy device 80x25
Dec 13 08:47:21.595823 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 08:47:21.595894 kernel: [drm] features: -context_init
Dec 13 08:47:21.598618 kernel: [drm] number of scanouts: 1
Dec 13 08:47:21.598692 kernel: [drm] number of cap sets: 0
Dec 13 08:47:21.602514 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Dec 13 08:47:21.619226 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 08:47:21.619359 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 08:47:21.621322 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:21.630488 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 08:47:21.650881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:47:21.651237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:21.664967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:21.728681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:47:21.729114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:21.734832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:21.789496 kernel: EDAC MC: Ver: 3.0.0
Dec 13 08:47:21.818248 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 08:47:21.829108 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 08:47:21.849523 lvm[1278]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 08:47:21.876315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:21.881882 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 08:47:21.884044 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:47:21.894780 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 08:47:21.903116 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 08:47:21.936431 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 08:47:21.938936 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 08:47:21.945745 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Dec 13 08:47:21.945976 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 08:47:21.946033 systemd[1]: Reached target machines.target - Containers.
Dec 13 08:47:21.948904 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 08:47:21.972503 kernel: ISO 9660 Extensions: RRIP_1991A
Dec 13 08:47:21.970285 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Dec 13 08:47:21.974345 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 08:47:21.979316 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 08:47:21.993865 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 08:47:22.000828 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 08:47:22.004137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:47:22.017006 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 08:47:22.025342 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 08:47:22.031820 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 08:47:22.039980 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 08:47:22.057840 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 08:47:22.061836 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 08:47:22.077698 kernel: loop0: detected capacity change from 0 to 140768
Dec 13 08:47:22.122328 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 08:47:22.155554 kernel: loop1: detected capacity change from 0 to 8
Dec 13 08:47:22.177309 kernel: loop2: detected capacity change from 0 to 211296
Dec 13 08:47:22.226620 kernel: loop3: detected capacity change from 0 to 142488
Dec 13 08:47:22.289000 kernel: loop4: detected capacity change from 0 to 140768
Dec 13 08:47:22.313370 kernel: loop5: detected capacity change from 0 to 8
Dec 13 08:47:22.317340 kernel: loop6: detected capacity change from 0 to 211296
Dec 13 08:47:22.337293 kernel: loop7: detected capacity change from 0 to 142488
Dec 13 08:47:22.355691 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Dec 13 08:47:22.356456 (sd-merge)[1310]: Merged extensions into '/usr'.
Dec 13 08:47:22.366150 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 08:47:22.366379 systemd[1]: Reloading...
Dec 13 08:47:22.528362 zram_generator::config[1347]: No configuration found.
Dec 13 08:47:22.738031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:47:22.804574 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 08:47:22.847779 systemd[1]: Reloading finished in 480 ms.
Dec 13 08:47:22.871939 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
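The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean extension images onto /usr; the kubernetes image is the sysext that Ignition downloaded and symlinked into /etc/extensions earlier, and each loopN capacity change is one image being attached. On the running host the merge state can be inspected and re-applied with the stock CLI:

    # Show which hierarchies currently have extension images merged.
    systemd-sysext status
    # Unmerge and re-merge after adding or removing extension images.
    systemd-sysext refresh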
Dec 13 08:47:22.876062 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 08:47:22.888908 systemd[1]: Starting ensure-sysext.service...
Dec 13 08:47:22.899677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 08:47:22.912083 systemd[1]: Reloading requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)...
Dec 13 08:47:22.912320 systemd[1]: Reloading...
Dec 13 08:47:22.931125 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 08:47:22.931524 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 08:47:22.932515 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 08:47:22.932903 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
Dec 13 08:47:22.932986 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
Dec 13 08:47:22.936871 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 08:47:22.936884 systemd-tmpfiles[1389]: Skipping /boot
Dec 13 08:47:22.950577 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 08:47:22.950592 systemd-tmpfiles[1389]: Skipping /boot
Dec 13 08:47:23.023511 zram_generator::config[1422]: No configuration found.
Dec 13 08:47:23.115621 systemd-networkd[1224]: eth1: Gained IPv6LL
Dec 13 08:47:23.179627 systemd-networkd[1224]: eth0: Gained IPv6LL
Dec 13 08:47:23.260299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:47:23.515162 systemd[1]: Reloading finished in 602 ms.
Dec 13 08:47:23.565228 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 08:47:23.576721 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:47:23.596905 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 08:47:23.610768 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 08:47:23.617673 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 08:47:23.631482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 08:47:23.639739 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 08:47:23.653949 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:23.656189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:23.664685 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:23.675912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:47:23.694920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:23.696474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:47:23.696795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:23.708300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:23.708800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:23.716816 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:23.717020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:23.730159 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:23.735728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:23.749927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:23.767011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:23.767769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:47:23.768026 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:23.777893 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 08:47:23.782722 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 08:47:23.786805 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 08:47:23.791748 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:47:23.792002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:47:23.799634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:23.799908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:23.806838 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:23.809768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:23.826184 augenrules[1513]: No rules
Dec 13 08:47:23.827289 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 08:47:23.836483 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:23.837021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:23.850680 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:23.860465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 08:47:23.871840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:47:23.883789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:23.886341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:47:23.900047 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 08:47:23.901563 systemd-resolved[1476]: Positive Trust Anchors:
Dec 13 08:47:23.901579 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 08:47:23.901631 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 08:47:23.903839 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 08:47:23.903883 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:23.908455 systemd[1]: Finished ensure-sysext.service.
Dec 13 08:47:23.909610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:23.909861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:23.912000 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 08:47:23.912240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 08:47:23.914832 systemd-resolved[1476]: Using system hostname 'ci-4081.2.1-f-1ee231485e'.
Dec 13 08:47:23.918950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:47:23.919209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:47:23.920943 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 08:47:23.924165 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:23.926745 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:23.937696 systemd[1]: Reached target network.target - Network.
Dec 13 08:47:23.940040 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 08:47:23.941600 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 08:47:23.942939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 08:47:23.943233 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 08:47:23.958827 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 08:47:23.961411 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 08:47:24.038966 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 08:47:24.040901 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 08:47:24.042798 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 08:47:24.043398 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 08:47:24.043982 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
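The positive trust anchor systemd-resolved prints is its built-in DNSSEC root anchor (the root zone DS record for key tag 20326); the negative anchors exempt private and reverse-lookup zones from DNSSEC validation. Once the system is up, the resolver's runtime view can be checked with the standard CLI, for example:

    # Per-link DNS servers, DNSSEC setting, and search domains.
    resolvectl status
    # Resolve a name through systemd-resolved.
    resolvectl query flatcar.org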
Dec 13 08:47:24.045193 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 08:47:24.045263 systemd[1]: Reached target paths.target - Path Units.
Dec 13 08:47:24.047255 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 08:47:24.048029 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 08:47:24.049381 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 08:47:24.050115 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 08:47:24.053290 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 08:47:24.057024 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 08:47:24.062109 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 08:47:24.064326 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 08:47:24.065524 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 08:47:24.065939 systemd[1]: Reached target basic.target - Basic System.
Dec 13 08:47:24.067269 systemd[1]: System is tainted: cgroupsv1
Dec 13 08:47:24.067367 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 08:47:24.067424 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 08:47:24.073599 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 08:47:24.084806 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 08:47:24.094770 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 08:47:24.100628 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 08:47:24.116789 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 08:47:24.118418 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 08:47:24.129598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:47:24.134730 jq[1549]: false
Dec 13 08:47:24.138772 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 08:47:24.151758 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 08:47:24.155889 coreos-metadata[1544]: Dec 13 08:47:24.155 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 08:47:24.165636 extend-filesystems[1550]: Found loop4
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found loop5
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found loop6
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found loop7
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found vda
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found vda1
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found vda2
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found vda3
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found usr
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found vda4
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found vda6
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found vda7
Dec 13 08:47:24.174931 extend-filesystems[1550]: Found vda9
Dec 13 08:47:24.174931 extend-filesystems[1550]: Checking size of /dev/vda9
Dec 13 08:47:24.206347 coreos-metadata[1544]: Dec 13 08:47:24.172 INFO Fetch successful
Dec 13 08:47:24.166296 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 08:47:24.186075 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 08:47:24.213860 dbus-daemon[1546]: [system] SELinux support is enabled
Dec 13 08:47:24.215743 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 08:47:24.868792 systemd-resolved[1476]: Clock change detected. Flushing caches.
Dec 13 08:47:24.869215 systemd-timesyncd[1538]: Contacted time server 198.169.208.142:123 (0.flatcar.pool.ntp.org).
Dec 13 08:47:24.869281 systemd-timesyncd[1538]: Initial clock synchronization to Fri 2024-12-13 08:47:24.868717 UTC.
Dec 13 08:47:24.870650 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 08:47:24.875095 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 08:47:24.888251 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 08:47:24.897512 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 08:47:24.900787 extend-filesystems[1550]: Resized partition /dev/vda9
Dec 13 08:47:24.900120 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 08:47:24.913148 extend-filesystems[1582]: resize2fs 1.47.1 (20-May-2024)
Dec 13 08:47:24.926996 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Dec 13 08:47:24.933343 jq[1580]: true
Dec 13 08:47:24.951380 update_engine[1578]: I20241213 08:47:24.938661 1578 main.cc:92] Flatcar Update Engine starting
Dec 13 08:47:24.951380 update_engine[1578]: I20241213 08:47:24.950579 1578 update_check_scheduler.cc:74] Next update check in 5m56s
Dec 13 08:47:24.957500 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 08:47:24.957788 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 08:47:24.966233 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 08:47:24.966582 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 08:47:24.971088 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 08:47:24.986942 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 08:47:24.987222 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
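coreos-metadata above fetches droplet metadata from DigitalOcean's link-local endpoint; the URL is taken verbatim from the log. As a hedged sketch of what that single HTTP round trip looks like (it only answers from inside a droplet, so this is illustrative, not portable):

```python
# Sketch of the metadata fetch coreos-metadata logs above. The URL comes
# straight from the log line; 169.254.169.254 is link-local and only
# reachable from inside a DigitalOcean droplet.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"

def fetch_metadata(timeout: float = 2.0) -> dict:
    with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    try:
        meta = fetch_metadata()
        # hostname and droplet_id are documented top-level keys of v1.json
        print(meta.get("hostname"), meta.get("droplet_id"))
    except OSError as exc:
        print(f"not running inside a droplet? {exc}")
```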
Dec 13 08:47:25.031898 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 08:47:25.031976 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 08:47:25.037685 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 08:47:25.037895 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Dec 13 08:47:25.037929 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 08:47:25.042806 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 08:47:25.043843 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 08:47:25.047140 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 08:47:25.056751 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 08:47:25.064480 jq[1593]: true
Dec 13 08:47:25.067673 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 08:47:25.097241 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 08:47:25.120920 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 08:47:25.144112 tar[1591]: linux-amd64/helm
Dec 13 08:47:25.156873 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 08:47:25.156873 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 08:47:25.156873 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 08:47:25.182080 extend-filesystems[1550]: Resized filesystem in /dev/vda9
Dec 13 08:47:25.182080 extend-filesystems[1550]: Found vdb
Dec 13 08:47:25.157808 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 08:47:25.158119 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 08:47:25.206551 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1617)
Dec 13 08:47:25.322437 systemd-logind[1572]: New seat seat0.
Dec 13 08:47:25.327275 bash[1638]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 08:47:25.325692 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 08:47:25.327025 systemd-logind[1572]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 08:47:25.327046 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 08:47:25.328062 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 08:47:25.352692 systemd[1]: Starting sshkeys.service...
Dec 13 08:47:25.421508 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
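The resize2fs messages above grow /dev/vda9 online from 553472 to 15121403 blocks, and state the block size ("(4k) blocks"). Redoing that conversion from the numbers in the log is simple arithmetic; the sketch below is only that, using the figures verbatim:

```python
# Convert the resize2fs block counts logged above into human units.
# Block size is 4 KiB, as stated in "now 15121403 (4k) blocks long".
BLOCK_SIZE = 4096
OLD_BLOCKS = 553_472     # filesystem size before the online resize
NEW_BLOCKS = 15_121_403  # filesystem size after the online resize

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {blocks_to_gib(OLD_BLOCKS):6.2f} GiB")   # ~2.11 GiB
print(f"after : {blocks_to_gib(NEW_BLOCKS):6.2f} GiB")   # ~57.68 GiB
```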
Dec 13 08:47:25.432785 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 08:47:25.434719 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 08:47:25.606631 coreos-metadata[1654]: Dec 13 08:47:25.600 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 08:47:25.609411 containerd[1595]: time="2024-12-13T08:47:25.607825850Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 08:47:25.621286 coreos-metadata[1654]: Dec 13 08:47:25.621 INFO Fetch successful
Dec 13 08:47:25.656437 unknown[1654]: wrote ssh authorized keys file for user: core
Dec 13 08:47:25.688551 containerd[1595]: time="2024-12-13T08:47:25.688429606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 08:47:25.691280 containerd[1595]: time="2024-12-13T08:47:25.691222779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:47:25.691480 containerd[1595]: time="2024-12-13T08:47:25.691420948Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 08:47:25.691572 containerd[1595]: time="2024-12-13T08:47:25.691557407Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 08:47:25.691768 containerd[1595]: time="2024-12-13T08:47:25.691750457Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 08:47:25.691829 containerd[1595]: time="2024-12-13T08:47:25.691818161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 08:47:25.691929 containerd[1595]: time="2024-12-13T08:47:25.691914373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:47:25.691998 containerd[1595]: time="2024-12-13T08:47:25.691984376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 08:47:25.692411 containerd[1595]: time="2024-12-13T08:47:25.692379433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:47:25.692515 containerd[1595]: time="2024-12-13T08:47:25.692501624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 08:47:25.692671 containerd[1595]: time="2024-12-13T08:47:25.692651260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:47:25.692756 containerd[1595]: time="2024-12-13T08:47:25.692738427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 08:47:25.692966 containerd[1595]: time="2024-12-13T08:47:25.692940672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 08:47:25.693382 containerd[1595]: time="2024-12-13T08:47:25.693342027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 08:47:25.694100 containerd[1595]: time="2024-12-13T08:47:25.693670497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 08:47:25.694100 containerd[1595]: time="2024-12-13T08:47:25.693695454Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 08:47:25.694100 containerd[1595]: time="2024-12-13T08:47:25.693784994Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 08:47:25.694100 containerd[1595]: time="2024-12-13T08:47:25.693888714Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 08:47:25.716049 containerd[1595]: time="2024-12-13T08:47:25.715428512Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 08:47:25.716049 containerd[1595]: time="2024-12-13T08:47:25.715567814Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 08:47:25.716049 containerd[1595]: time="2024-12-13T08:47:25.715596519Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 08:47:25.716049 containerd[1595]: time="2024-12-13T08:47:25.715666435Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 08:47:25.716049 containerd[1595]: time="2024-12-13T08:47:25.715688231Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 08:47:25.716049 containerd[1595]: time="2024-12-13T08:47:25.715878192Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718436051Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718610285Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718632169Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718652458Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718673379Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718703931Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718726835Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718747605Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718767925Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718785785Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718803092Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718821058Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718856400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.719555 containerd[1595]: time="2024-12-13T08:47:25.718877198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.718894741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.718913547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.718930411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.718949779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.718966388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.718985178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.719002963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.719022402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.719038470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.719056547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.719073414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.719096567Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.719133756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.719153252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.720227 containerd[1595]: time="2024-12-13T08:47:25.719169398Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 08:47:25.721085 containerd[1595]: time="2024-12-13T08:47:25.719231671Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 08:47:25.721085 containerd[1595]: time="2024-12-13T08:47:25.719257280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 08:47:25.721085 containerd[1595]: time="2024-12-13T08:47:25.719273767Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 08:47:25.721085 containerd[1595]: time="2024-12-13T08:47:25.719290422Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 08:47:25.722234 containerd[1595]: time="2024-12-13T08:47:25.719307224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.722234 containerd[1595]: time="2024-12-13T08:47:25.722020050Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 08:47:25.722234 containerd[1595]: time="2024-12-13T08:47:25.722042090Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 08:47:25.722234 containerd[1595]: time="2024-12-13T08:47:25.722054966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 08:47:25.723965 containerd[1595]: time="2024-12-13T08:47:25.723796477Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 08:47:25.726428 containerd[1595]: time="2024-12-13T08:47:25.724252974Z" level=info msg="Connect containerd service"
Dec 13 08:47:25.726428 containerd[1595]: time="2024-12-13T08:47:25.724368239Z" level=info msg="using legacy CRI server"
Dec 13 08:47:25.726428 containerd[1595]: time="2024-12-13T08:47:25.724380689Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 08:47:25.726428 containerd[1595]: time="2024-12-13T08:47:25.724527509Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 08:47:25.726620 update-ssh-keys[1661]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 08:47:25.728342 containerd[1595]: time="2024-12-13T08:47:25.728038331Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 08:47:25.730686 containerd[1595]: time="2024-12-13T08:47:25.730638809Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 08:47:25.730917 containerd[1595]: time="2024-12-13T08:47:25.730884830Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 08:47:25.731474 containerd[1595]: time="2024-12-13T08:47:25.731425729Z" level=info msg="Start subscribing containerd event"
Dec 13 08:47:25.733343 containerd[1595]: time="2024-12-13T08:47:25.731597543Z" level=info msg="Start recovering state"
Dec 13 08:47:25.733343 containerd[1595]: time="2024-12-13T08:47:25.731692840Z" level=info msg="Start event monitor"
Dec 13 08:47:25.733343 containerd[1595]: time="2024-12-13T08:47:25.731721674Z" level=info msg="Start snapshots syncer"
Dec 13 08:47:25.733343 containerd[1595]: time="2024-12-13T08:47:25.731735997Z" level=info msg="Start cni network conf syncer for default"
Dec 13 08:47:25.733343 containerd[1595]: time="2024-12-13T08:47:25.731745992Z" level=info msg="Start streaming server"
Dec 13 08:47:25.733343 containerd[1595]: time="2024-12-13T08:47:25.731810838Z" level=info msg="containerd successfully booted in 0.126742s"
Dec 13 08:47:25.731817 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 08:47:25.744406 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 08:47:25.751820 systemd[1]: Finished sshkeys.service.
Dec 13 08:47:25.810085 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 08:47:25.873662 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 08:47:25.885787 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 08:47:25.916648 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 08:47:25.917085 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 08:47:25.930120 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 08:47:25.972398 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 08:47:25.988900 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 08:47:26.006585 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 08:47:26.009346 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 08:47:26.420568 tar[1591]: linux-amd64/LICENSE
Dec 13 08:47:26.421114 tar[1591]: linux-amd64/README.md
Dec 13 08:47:26.448715 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 08:47:26.853598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:26.858978 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 08:47:26.863443 systemd[1]: Startup finished in 8.114s (kernel) + 7.761s (userspace) = 15.875s.
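systemd's "Startup finished" line above reports the kernel and userspace phases separately (8.114 s + 7.761 s = 15.875 s). A narrow parsing sketch over that exact line, handy when eyeballing many boot logs; the regex only covers the two-phase form seen here:

```python
import re

# The exact line systemd printed above; only the kernel + userspace
# form seen in this log is handled, so treat this as a narrow sketch.
line = "Startup finished in 8.114s (kernel) + 7.761s (userspace) = 15.875s."

m = re.match(
    r"Startup finished in ([\d.]+)s \(kernel\) \+ ([\d.]+)s \(userspace\) = ([\d.]+)s\.",
    line,
)
kernel, userspace, total = map(float, m.groups())
assert abs(kernel + userspace - total) < 1e-9   # 8.114 + 7.761 == 15.875
print(f"kernel {kernel}s, userspace {userspace}s, total {total}s")
```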
Dec 13 08:47:26.876038 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 08:47:28.014225 kubelet[1703]: E1213 08:47:28.014107 1703 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 08:47:28.018256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 08:47:28.019360 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 08:47:32.357328 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 08:47:32.366885 systemd[1]: Started sshd@0-146.190.59.17:22-147.75.109.163:43834.service - OpenSSH per-connection server daemon (147.75.109.163:43834).
Dec 13 08:47:32.456258 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 43834 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:47:32.459541 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:47:32.474159 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 08:47:32.480802 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 08:47:32.485463 systemd-logind[1572]: New session 1 of user core.
Dec 13 08:47:32.505533 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 08:47:32.516044 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 08:47:32.521712 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 08:47:32.678616 systemd[1722]: Queued start job for default target default.target.
Dec 13 08:47:32.679245 systemd[1722]: Created slice app.slice - User Application Slice.
Dec 13 08:47:32.679285 systemd[1722]: Reached target paths.target - Paths.
Dec 13 08:47:32.679305 systemd[1722]: Reached target timers.target - Timers.
Dec 13 08:47:32.686509 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 08:47:32.698630 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 08:47:32.698741 systemd[1722]: Reached target sockets.target - Sockets.
Dec 13 08:47:32.698764 systemd[1722]: Reached target basic.target - Basic System.
Dec 13 08:47:32.698839 systemd[1722]: Reached target default.target - Main User Target.
Dec 13 08:47:32.698884 systemd[1722]: Startup finished in 167ms.
Dec 13 08:47:32.699218 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 08:47:32.706968 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 08:47:32.776903 systemd[1]: Started sshd@1-146.190.59.17:22-147.75.109.163:43840.service - OpenSSH per-connection server daemon (147.75.109.163:43840).
Dec 13 08:47:32.838292 sshd[1734]: Accepted publickey for core from 147.75.109.163 port 43840 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:47:32.840833 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:47:32.849217 systemd-logind[1572]: New session 2 of user core.
Dec 13 08:47:32.852685 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 08:47:32.920370 sshd[1734]: pam_unix(sshd:session): session closed for user core
Dec 13 08:47:32.927259 systemd[1]: sshd@1-146.190.59.17:22-147.75.109.163:43840.service: Deactivated successfully.
Dec 13 08:47:32.931003 systemd-logind[1572]: Session 2 logged out. Waiting for processes to exit.
Dec 13 08:47:32.938831 systemd[1]: Started sshd@2-146.190.59.17:22-147.75.109.163:43856.service - OpenSSH per-connection server daemon (147.75.109.163:43856).
Dec 13 08:47:32.939627 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 08:47:32.941740 systemd-logind[1572]: Removed session 2.
Dec 13 08:47:32.999636 sshd[1742]: Accepted publickey for core from 147.75.109.163 port 43856 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:47:33.001819 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:47:33.010671 systemd-logind[1572]: New session 3 of user core.
Dec 13 08:47:33.019948 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 08:47:33.079792 sshd[1742]: pam_unix(sshd:session): session closed for user core
Dec 13 08:47:33.091906 systemd[1]: Started sshd@3-146.190.59.17:22-147.75.109.163:43860.service - OpenSSH per-connection server daemon (147.75.109.163:43860).
Dec 13 08:47:33.092782 systemd[1]: sshd@2-146.190.59.17:22-147.75.109.163:43856.service: Deactivated successfully.
Dec 13 08:47:33.094907 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 08:47:33.098026 systemd-logind[1572]: Session 3 logged out. Waiting for processes to exit.
Dec 13 08:47:33.100871 systemd-logind[1572]: Removed session 3.
Dec 13 08:47:33.141696 sshd[1747]: Accepted publickey for core from 147.75.109.163 port 43860 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:47:33.143941 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:47:33.151776 systemd-logind[1572]: New session 4 of user core.
Dec 13 08:47:33.160853 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 08:47:33.224485 sshd[1747]: pam_unix(sshd:session): session closed for user core
Dec 13 08:47:33.231549 systemd[1]: sshd@3-146.190.59.17:22-147.75.109.163:43860.service: Deactivated successfully.
Dec 13 08:47:33.235534 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 08:47:33.236482 systemd-logind[1572]: Session 4 logged out. Waiting for processes to exit.
Dec 13 08:47:33.247785 systemd[1]: Started sshd@4-146.190.59.17:22-147.75.109.163:43866.service - OpenSSH per-connection server daemon (147.75.109.163:43866).
Dec 13 08:47:33.249066 systemd-logind[1572]: Removed session 4.
Dec 13 08:47:33.297029 sshd[1758]: Accepted publickey for core from 147.75.109.163 port 43866 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:47:33.299353 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:47:33.309394 systemd-logind[1572]: New session 5 of user core.
Dec 13 08:47:33.315988 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 08:47:33.394024 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 08:47:33.395028 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 08:47:33.409944 sudo[1762]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:33.413991 sshd[1758]: pam_unix(sshd:session): session closed for user core
Dec 13 08:47:33.420424 systemd[1]: sshd@4-146.190.59.17:22-147.75.109.163:43866.service: Deactivated successfully.
Dec 13 08:47:33.424467 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 08:47:33.425919 systemd-logind[1572]: Session 5 logged out. Waiting for processes to exit.
Dec 13 08:47:33.429788 systemd[1]: Started sshd@5-146.190.59.17:22-147.75.109.163:43882.service - OpenSSH per-connection server daemon (147.75.109.163:43882).
Dec 13 08:47:33.432056 systemd-logind[1572]: Removed session 5.
Dec 13 08:47:33.488004 sshd[1767]: Accepted publickey for core from 147.75.109.163 port 43882 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:47:33.490552 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:47:33.499068 systemd-logind[1572]: New session 6 of user core.
Dec 13 08:47:33.504886 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 08:47:33.568573 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 08:47:33.568928 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 08:47:33.573946 sudo[1772]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:33.581695 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 08:47:33.582182 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 08:47:33.603916 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 08:47:33.606204 auditctl[1775]: No rules
Dec 13 08:47:33.606775 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 08:47:33.607033 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 08:47:33.622772 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 08:47:33.656856 augenrules[1794]: No rules
Dec 13 08:47:33.658756 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 08:47:33.661649 sudo[1771]: pam_unix(sudo:session): session closed for user root
Dec 13 08:47:33.666724 sshd[1767]: pam_unix(sshd:session): session closed for user core
Dec 13 08:47:33.673610 systemd[1]: sshd@5-146.190.59.17:22-147.75.109.163:43882.service: Deactivated successfully.
Dec 13 08:47:33.677366 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 08:47:33.678932 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit.
Dec 13 08:47:33.685865 systemd[1]: Started sshd@6-146.190.59.17:22-147.75.109.163:43888.service - OpenSSH per-connection server daemon (147.75.109.163:43888).
Dec 13 08:47:33.689395 systemd-logind[1572]: Removed session 6.
Dec 13 08:47:33.740174 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 43888 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:47:33.741157 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:47:33.750538 systemd-logind[1572]: New session 7 of user core.
Dec 13 08:47:33.760017 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 08:47:33.823586 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 08:47:33.824098 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 08:47:34.355940 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 08:47:34.356211 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 08:47:34.904156 dockerd[1822]: time="2024-12-13T08:47:34.904076342Z" level=info msg="Starting up"
Dec 13 08:47:35.075834 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4012045103-merged.mount: Deactivated successfully.
Dec 13 08:47:35.199767 systemd[1]: var-lib-docker-metacopy\x2dcheck4109100750-merged.mount: Deactivated successfully.
Dec 13 08:47:35.221458 dockerd[1822]: time="2024-12-13T08:47:35.221382883Z" level=info msg="Loading containers: start."
Dec 13 08:47:35.379367 kernel: Initializing XFRM netlink socket
Dec 13 08:47:35.490713 systemd-networkd[1224]: docker0: Link UP
Dec 13 08:47:35.534783 dockerd[1822]: time="2024-12-13T08:47:35.534727662Z" level=info msg="Loading containers: done."
Dec 13 08:47:35.565219 dockerd[1822]: time="2024-12-13T08:47:35.565031481Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 08:47:35.565219 dockerd[1822]: time="2024-12-13T08:47:35.565182172Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 08:47:35.565525 dockerd[1822]: time="2024-12-13T08:47:35.565358350Z" level=info msg="Daemon has completed initialization"
Dec 13 08:47:35.632481 dockerd[1822]: time="2024-12-13T08:47:35.631685315Z" level=info msg="API listen on /run/docker.sock"
Dec 13 08:47:35.631994 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 08:47:36.068601 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2157621429-merged.mount: Deactivated successfully.
Dec 13 08:47:36.788892 containerd[1595]: time="2024-12-13T08:47:36.788412563Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 08:47:37.486984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674490454.mount: Deactivated successfully.
Dec 13 08:47:38.269104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 08:47:38.282255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:47:38.456691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:38.469497 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 08:47:38.562034 kubelet[2039]: E1213 08:47:38.561602 2039 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 08:47:38.570608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 08:47:38.571613 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 08:47:39.538410 containerd[1595]: time="2024-12-13T08:47:39.538335464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:39.542046 containerd[1595]: time="2024-12-13T08:47:39.541969036Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Dec 13 08:47:39.545351 containerd[1595]: time="2024-12-13T08:47:39.544802762Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:39.553284 containerd[1595]: time="2024-12-13T08:47:39.553176976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:39.555964 containerd[1595]: time="2024-12-13T08:47:39.555739923Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.767255085s"
Dec 13 08:47:39.555964 containerd[1595]: time="2024-12-13T08:47:39.555797180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 08:47:39.591349 containerd[1595]: time="2024-12-13T08:47:39.590387652Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 08:47:41.816403 containerd[1595]: time="2024-12-13T08:47:41.816024950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:41.820620 containerd[1595]: time="2024-12-13T08:47:41.820457672Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Dec 13 08:47:41.824623 containerd[1595]: time="2024-12-13T08:47:41.824528252Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:41.831409 containerd[1595]: time="2024-12-13T08:47:41.831304975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:41.833272 containerd[1595]: time="2024-12-13T08:47:41.833057361Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.242622776s"
Dec 13 08:47:41.833272 containerd[1595]: time="2024-12-13T08:47:41.833119696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 08:47:41.870673 containerd[1595]: time="2024-12-13T08:47:41.870338177Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 08:47:43.227560 containerd[1595]: time="2024-12-13T08:47:43.227445571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:43.230358 containerd[1595]: time="2024-12-13T08:47:43.230257002Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Dec 13 08:47:43.233666 containerd[1595]: time="2024-12-13T08:47:43.233589265Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:43.240062 containerd[1595]: time="2024-12-13T08:47:43.239952715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:43.242192 containerd[1595]: time="2024-12-13T08:47:43.241934080Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.371539289s"
Dec 13 08:47:43.242192 containerd[1595]: time="2024-12-13T08:47:43.241996391Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 08:47:43.282355 containerd[1595]: time="2024-12-13T08:47:43.282276186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 08:47:43.286058 systemd-resolved[1476]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Dec 13 08:47:44.626069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188283340.mount: Deactivated successfully.
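containerd reports each completed pull above with a byte size and wall time (e.g. kube-controller-manager: 33662844 bytes in 2.242622776 s). Redoing that division gives an effective pull throughput; the sketch below only reuses the (size, duration) pairs verbatim from the "Pulled image" lines above.

```python
# Effective pull throughput for the image pulls logged above.
# (size, seconds) pairs are copied verbatim from the containerd messages.
pulls = {
    "kube-apiserver:v1.29.12":          (35_136_054, 2.767255085),
    "kube-controller-manager:v1.29.12": (33_662_844, 2.242622776),
    "kube-scheduler:v1.29.12":          (18_777_952, 1.371539289),
}

for image, (size, seconds) in pulls.items():
    mib_per_s = size / seconds / 2**20   # bytes/s -> MiB/s
    print(f"{image:<35} {mib_per_s:5.1f} MiB/s")
```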
Dec 13 08:47:45.183860 containerd[1595]: time="2024-12-13T08:47:45.183798119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:45.186122 containerd[1595]: time="2024-12-13T08:47:45.186059970Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Dec 13 08:47:45.188584 containerd[1595]: time="2024-12-13T08:47:45.188539472Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:45.193371 containerd[1595]: time="2024-12-13T08:47:45.193273493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:45.194458 containerd[1595]: time="2024-12-13T08:47:45.193992273Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.911657828s"
Dec 13 08:47:45.194458 containerd[1595]: time="2024-12-13T08:47:45.194037516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 08:47:45.222492 containerd[1595]: time="2024-12-13T08:47:45.222449111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 08:47:46.028723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009688163.mount: Deactivated successfully.
Dec 13 08:47:46.339548 systemd-resolved[1476]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Dec 13 08:47:47.122995 containerd[1595]: time="2024-12-13T08:47:47.122915264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:47.127374 containerd[1595]: time="2024-12-13T08:47:47.127068772Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 08:47:47.130378 containerd[1595]: time="2024-12-13T08:47:47.130279101Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:47.140692 containerd[1595]: time="2024-12-13T08:47:47.140600696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:47.143491 containerd[1595]: time="2024-12-13T08:47:47.143434720Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.920932687s"
Dec 13 08:47:47.144119 containerd[1595]: time="2024-12-13T08:47:47.143679779Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 08:47:47.185364 containerd[1595]: time="2024-12-13T08:47:47.185280120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 08:47:47.805465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927759191.mount: Deactivated successfully.
Dec 13 08:47:47.820434 containerd[1595]: time="2024-12-13T08:47:47.820298470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:47.823333 containerd[1595]: time="2024-12-13T08:47:47.823195919Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 08:47:47.826621 containerd[1595]: time="2024-12-13T08:47:47.826400624Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:47.833145 containerd[1595]: time="2024-12-13T08:47:47.833023430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:47.835953 containerd[1595]: time="2024-12-13T08:47:47.835236452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 649.52731ms"
Dec 13 08:47:47.835953 containerd[1595]: time="2024-12-13T08:47:47.835524946Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 08:47:47.877544 containerd[1595]: time="2024-12-13T08:47:47.877504178Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 08:47:48.443022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771776049.mount: Deactivated successfully.
Dec 13 08:47:48.706578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 08:47:48.717673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:47:48.906572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:48.916065 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 08:47:49.048656 kubelet[2163]: E1213 08:47:49.048476 2163 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 08:47:49.060345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 08:47:49.062395 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 08:47:50.861050 containerd[1595]: time="2024-12-13T08:47:50.860793011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:50.864575 containerd[1595]: time="2024-12-13T08:47:50.864457396Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Dec 13 08:47:50.868465 containerd[1595]: time="2024-12-13T08:47:50.868369932Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:50.875623 containerd[1595]: time="2024-12-13T08:47:50.875522699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:47:50.877872 containerd[1595]: time="2024-12-13T08:47:50.877627326Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.999857439s"
Dec 13 08:47:50.877872 containerd[1595]: time="2024-12-13T08:47:50.877701268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 08:47:53.826452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:53.838888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:47:53.878422 systemd[1]: Reloading requested from client PID 2271 ('systemctl') (unit session-7.scope)...
Dec 13 08:47:53.878617 systemd[1]: Reloading...
Dec 13 08:47:54.049374 zram_generator::config[2311]: No configuration found.
Dec 13 08:47:54.264585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:47:54.389216 systemd[1]: Reloading finished in 509 ms.
Dec 13 08:47:54.463980 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 08:47:54.464108 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 08:47:54.464546 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:54.486023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:47:54.629586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:54.635784 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 08:47:54.717602 kubelet[2374]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 08:47:54.720108 kubelet[2374]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 08:47:54.720108 kubelet[2374]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:47:54.720108 kubelet[2374]: I1213 08:47:54.718282 2374 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:47:55.286256 kubelet[2374]: I1213 08:47:55.286206 2374 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 08:47:55.286473 kubelet[2374]: I1213 08:47:55.286459 2374 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:47:55.286895 kubelet[2374]: I1213 08:47:55.286870 2374 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 08:47:55.325217 kubelet[2374]: I1213 08:47:55.325164 2374 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:47:55.326554 kubelet[2374]: E1213 08:47:55.326515 2374 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.59.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:55.348796 kubelet[2374]: I1213 08:47:55.348761 2374 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 08:47:55.355558 kubelet[2374]: I1213 08:47:55.355487 2374 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:47:55.357420 kubelet[2374]: I1213 08:47:55.357373 2374 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:47:55.357948 kubelet[2374]: I1213 08:47:55.357702 2374 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:47:55.357948 kubelet[2374]: I1213 08:47:55.357727 2374 container_manager_linux.go:301] "Creating device plugin manager" Dec 
13 08:47:55.358385 kubelet[2374]: I1213 08:47:55.358167 2374 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:47:55.358608 kubelet[2374]: I1213 08:47:55.358445 2374 kubelet.go:396] "Attempting to sync node with API server" Dec 13 08:47:55.358608 kubelet[2374]: I1213 08:47:55.358472 2374 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:47:55.359039 kubelet[2374]: I1213 08:47:55.358810 2374 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:47:55.359039 kubelet[2374]: I1213 08:47:55.358845 2374 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:47:55.361349 kubelet[2374]: W1213 08:47:55.360181 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://146.190.59.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-f-1ee231485e&limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:55.361349 kubelet[2374]: E1213 08:47:55.360281 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.59.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-f-1ee231485e&limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:55.361349 kubelet[2374]: W1213 08:47:55.360775 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://146.190.59.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:55.361349 kubelet[2374]: E1213 08:47:55.360845 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.59.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:55.361349 kubelet[2374]: I1213 08:47:55.361348 2374 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:47:55.367879 kubelet[2374]: I1213 08:47:55.367826 2374 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:47:55.371999 kubelet[2374]: W1213 08:47:55.371946 2374 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
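The "connection refused" storm against 146.190.59.17:6443 is the usual control-plane chicken-and-egg: this kubelet is the component that will start the API server it is trying to reach, from the static pod path it registered just above. A sketch of what it watches, assuming the usual kubeadm layout (file names are typical; this log only shows the apiserver, controller-manager, and scheduler sandboxes being created below):

    # Static pods run straight from manifests on disk, no API server required:
    ls /etc/kubernetes/manifests/
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

Once the kube-apiserver static pod is up, the reflector list/watch calls and the node lease requests in the entries that follow start succeeding.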
Dec 13 08:47:55.375672 kubelet[2374]: I1213 08:47:55.375633 2374 server.go:1256] "Started kubelet" Dec 13 08:47:55.376598 kubelet[2374]: I1213 08:47:55.376561 2374 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:47:55.377332 kubelet[2374]: I1213 08:47:55.376800 2374 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:47:55.377332 kubelet[2374]: I1213 08:47:55.377192 2374 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:47:55.378125 kubelet[2374]: I1213 08:47:55.378098 2374 server.go:461] "Adding debug handlers to kubelet server" Dec 13 08:47:55.382208 kubelet[2374]: I1213 08:47:55.382172 2374 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:47:55.392306 kubelet[2374]: E1213 08:47:55.392255 2374 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.59.17:6443/api/v1/namespaces/default/events\": dial tcp 146.190.59.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-f-1ee231485e.1810b046f6fca7bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-f-1ee231485e,UID:ci-4081.2.1-f-1ee231485e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-f-1ee231485e,},FirstTimestamp:2024-12-13 08:47:55.375593405 +0000 UTC m=+0.733790705,LastTimestamp:2024-12-13 08:47:55.375593405 +0000 UTC m=+0.733790705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-f-1ee231485e,}" Dec 13 08:47:55.395356 kubelet[2374]: I1213 08:47:55.394807 2374 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:47:55.397434 kubelet[2374]: I1213 08:47:55.397399 2374 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 08:47:55.398396 kubelet[2374]: I1213 08:47:55.398359 2374 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 08:47:55.400076 kubelet[2374]: E1213 08:47:55.399931 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.59.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-f-1ee231485e?timeout=10s\": dial tcp 146.190.59.17:6443: connect: connection refused" interval="200ms" Dec 13 08:47:55.400448 kubelet[2374]: W1213 08:47:55.400261 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://146.190.59.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:55.400634 kubelet[2374]: E1213 08:47:55.400587 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.59.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:55.400876 kubelet[2374]: I1213 08:47:55.400751 2374 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:47:55.401505 kubelet[2374]: I1213 08:47:55.401482 2374 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such 
file or directory Dec 13 08:47:55.401841 kubelet[2374]: E1213 08:47:55.401818 2374 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 08:47:55.407018 kubelet[2374]: I1213 08:47:55.406798 2374 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:47:55.426139 kubelet[2374]: I1213 08:47:55.425988 2374 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:47:55.435038 kubelet[2374]: I1213 08:47:55.434980 2374 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 08:47:55.436725 kubelet[2374]: I1213 08:47:55.435070 2374 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:47:55.436725 kubelet[2374]: I1213 08:47:55.435128 2374 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 08:47:55.436725 kubelet[2374]: E1213 08:47:55.435278 2374 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:47:55.439946 kubelet[2374]: W1213 08:47:55.436904 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://146.190.59.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:55.439946 kubelet[2374]: E1213 08:47:55.437493 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.59.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:55.453182 kubelet[2374]: I1213 08:47:55.453083 2374 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:47:55.453182 kubelet[2374]: I1213 08:47:55.453168 2374 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:47:55.453182 kubelet[2374]: I1213 08:47:55.453192 2374 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:47:55.463069 kubelet[2374]: I1213 08:47:55.462995 2374 policy_none.go:49] "None policy: Start" Dec 13 08:47:55.466812 kubelet[2374]: I1213 08:47:55.466724 2374 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:47:55.467006 kubelet[2374]: I1213 08:47:55.466812 2374 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:47:55.481345 kubelet[2374]: I1213 08:47:55.480884 2374 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:47:55.481345 kubelet[2374]: I1213 08:47:55.481347 2374 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:47:55.486452 kubelet[2374]: E1213 08:47:55.486419 2374 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-f-1ee231485e\" not found" Dec 13 08:47:55.497239 kubelet[2374]: I1213 08:47:55.497206 2374 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.497926 kubelet[2374]: E1213 08:47:55.497736 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.59.17:6443/api/v1/nodes\": dial tcp 146.190.59.17:6443: connect: connection refused" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.535815 kubelet[2374]: I1213 08:47:55.535754 2374 
topology_manager.go:215] "Topology Admit Handler" podUID="ff020e86a14abe7b4e6855da72a7f3b5" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.543376 kubelet[2374]: I1213 08:47:55.540243 2374 topology_manager.go:215] "Topology Admit Handler" podUID="412191306363a2de9b5a0940c1508afd" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.543376 kubelet[2374]: I1213 08:47:55.541996 2374 topology_manager.go:215] "Topology Admit Handler" podUID="a63674f1025984e5c6e2d12584db9983" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.600549 kubelet[2374]: I1213 08:47:55.600497 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.600713 kubelet[2374]: I1213 08:47:55.600573 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/412191306363a2de9b5a0940c1508afd-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-f-1ee231485e\" (UID: \"412191306363a2de9b5a0940c1508afd\") " pod="kube-system/kube-scheduler-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.600713 kubelet[2374]: I1213 08:47:55.600612 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a63674f1025984e5c6e2d12584db9983-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-f-1ee231485e\" (UID: \"a63674f1025984e5c6e2d12584db9983\") " pod="kube-system/kube-apiserver-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.600713 kubelet[2374]: I1213 08:47:55.600643 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.600713 kubelet[2374]: I1213 08:47:55.600673 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.600713 kubelet[2374]: I1213 08:47:55.600700 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.600860 kubelet[2374]: I1213 08:47:55.600729 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-kubeconfig\") pod 
\"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.600860 kubelet[2374]: I1213 08:47:55.600758 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a63674f1025984e5c6e2d12584db9983-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-f-1ee231485e\" (UID: \"a63674f1025984e5c6e2d12584db9983\") " pod="kube-system/kube-apiserver-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.600860 kubelet[2374]: I1213 08:47:55.600788 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a63674f1025984e5c6e2d12584db9983-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-f-1ee231485e\" (UID: \"a63674f1025984e5c6e2d12584db9983\") " pod="kube-system/kube-apiserver-ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.601302 kubelet[2374]: E1213 08:47:55.601278 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.59.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-f-1ee231485e?timeout=10s\": dial tcp 146.190.59.17:6443: connect: connection refused" interval="400ms" Dec 13 08:47:55.699920 kubelet[2374]: I1213 08:47:55.699873 2374 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.700450 kubelet[2374]: E1213 08:47:55.700426 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.59.17:6443/api/v1/nodes\": dial tcp 146.190.59.17:6443: connect: connection refused" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:47:55.850442 kubelet[2374]: E1213 08:47:55.850230 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:55.851074 kubelet[2374]: E1213 08:47:55.850893 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:55.852416 kubelet[2374]: E1213 08:47:55.851985 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:55.854556 containerd[1595]: time="2024-12-13T08:47:55.851434522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-f-1ee231485e,Uid:ff020e86a14abe7b4e6855da72a7f3b5,Namespace:kube-system,Attempt:0,}" Dec 13 08:47:55.857027 containerd[1595]: time="2024-12-13T08:47:55.856904342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-f-1ee231485e,Uid:a63674f1025984e5c6e2d12584db9983,Namespace:kube-system,Attempt:0,}" Dec 13 08:47:55.857182 containerd[1595]: time="2024-12-13T08:47:55.856907031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-f-1ee231485e,Uid:412191306363a2de9b5a0940c1508afd,Namespace:kube-system,Attempt:0,}" Dec 13 08:47:55.860467 systemd-resolved[1476]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Dec 13 08:47:56.002188 kubelet[2374]: E1213 08:47:56.002128 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.59.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-f-1ee231485e?timeout=10s\": dial tcp 146.190.59.17:6443: connect: connection refused" interval="800ms" Dec 13 08:47:56.102674 kubelet[2374]: I1213 08:47:56.102477 2374 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:47:56.103382 kubelet[2374]: E1213 08:47:56.102827 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.59.17:6443/api/v1/nodes\": dial tcp 146.190.59.17:6443: connect: connection refused" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:47:56.303611 kubelet[2374]: W1213 08:47:56.303522 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://146.190.59.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-f-1ee231485e&limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:56.303611 kubelet[2374]: E1213 08:47:56.303613 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.59.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-f-1ee231485e&limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:56.477380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount610033638.mount: Deactivated successfully. Dec 13 08:47:56.557085 containerd[1595]: time="2024-12-13T08:47:56.556940534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:47:56.560019 containerd[1595]: time="2024-12-13T08:47:56.559951091Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:47:56.562227 containerd[1595]: time="2024-12-13T08:47:56.562091714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 08:47:56.564481 containerd[1595]: time="2024-12-13T08:47:56.564412880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:47:56.568220 containerd[1595]: time="2024-12-13T08:47:56.568155025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:47:56.585239 containerd[1595]: time="2024-12-13T08:47:56.585132525Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:47:56.605625 containerd[1595]: time="2024-12-13T08:47:56.605536532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:47:56.622170 kubelet[2374]: W1213 08:47:56.622074 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://146.190.59.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: 
connect: connection refused Dec 13 08:47:56.622170 kubelet[2374]: E1213 08:47:56.622179 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.59.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:56.642349 containerd[1595]: time="2024-12-13T08:47:56.642208892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:47:56.644876 containerd[1595]: time="2024-12-13T08:47:56.644605295Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 789.992803ms" Dec 13 08:47:56.646918 containerd[1595]: time="2024-12-13T08:47:56.645237926Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 788.084993ms" Dec 13 08:47:56.647736 containerd[1595]: time="2024-12-13T08:47:56.647683185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 790.676108ms" Dec 13 08:47:56.733391 kubelet[2374]: W1213 08:47:56.732805 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://146.190.59.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:56.733391 kubelet[2374]: E1213 08:47:56.732929 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.59.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:56.803921 kubelet[2374]: E1213 08:47:56.803839 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.59.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-f-1ee231485e?timeout=10s\": dial tcp 146.190.59.17:6443: connect: connection refused" interval="1.6s" Dec 13 08:47:56.907785 kubelet[2374]: I1213 08:47:56.907736 2374 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:47:56.909714 kubelet[2374]: E1213 08:47:56.908200 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.59.17:6443/api/v1/nodes\": dial tcp 146.190.59.17:6443: connect: connection refused" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:47:56.948763 kubelet[2374]: W1213 08:47:56.948658 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://146.190.59.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:56.948763 kubelet[2374]: E1213 08:47:56.948763 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.59.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:57.032415 containerd[1595]: time="2024-12-13T08:47:57.031914236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:47:57.032415 containerd[1595]: time="2024-12-13T08:47:57.032014312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:47:57.032415 containerd[1595]: time="2024-12-13T08:47:57.032040797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:47:57.033663 containerd[1595]: time="2024-12-13T08:47:57.032211359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:47:57.034999 containerd[1595]: time="2024-12-13T08:47:57.034882054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:47:57.034999 containerd[1595]: time="2024-12-13T08:47:57.034958483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:47:57.035401 containerd[1595]: time="2024-12-13T08:47:57.034993465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:47:57.035401 containerd[1595]: time="2024-12-13T08:47:57.035183101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:47:57.045721 containerd[1595]: time="2024-12-13T08:47:57.044849583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:47:57.045721 containerd[1595]: time="2024-12-13T08:47:57.044940878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:47:57.045721 containerd[1595]: time="2024-12-13T08:47:57.044963122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:47:57.045721 containerd[1595]: time="2024-12-13T08:47:57.045117844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:47:57.169624 containerd[1595]: time="2024-12-13T08:47:57.169465083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-f-1ee231485e,Uid:a63674f1025984e5c6e2d12584db9983,Namespace:kube-system,Attempt:0,} returns sandbox id \"59b46b64678db72fed92031cbd75af05325d77e3613682c40c3d4f9a55536762\"" Dec 13 08:47:57.171733 kubelet[2374]: E1213 08:47:57.171360 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:57.178602 containerd[1595]: time="2024-12-13T08:47:57.178547312Z" level=info msg="CreateContainer within sandbox \"59b46b64678db72fed92031cbd75af05325d77e3613682c40c3d4f9a55536762\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 08:47:57.193068 containerd[1595]: time="2024-12-13T08:47:57.192845029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-f-1ee231485e,Uid:ff020e86a14abe7b4e6855da72a7f3b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9af5fac4414947f9f01ba13b3cc3b0ef22fe46b5d02860d56d9ea4a15241d24e\"" Dec 13 08:47:57.195450 kubelet[2374]: E1213 08:47:57.195407 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:57.204809 containerd[1595]: time="2024-12-13T08:47:57.204445484Z" level=info msg="CreateContainer within sandbox \"9af5fac4414947f9f01ba13b3cc3b0ef22fe46b5d02860d56d9ea4a15241d24e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 08:47:57.216471 containerd[1595]: time="2024-12-13T08:47:57.216370119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-f-1ee231485e,Uid:412191306363a2de9b5a0940c1508afd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fb4b82eb30ba0377aa56af9a2097e86bf13edc6d47cd0da6d0d86bbf383d89d\"" Dec 13 08:47:57.218131 kubelet[2374]: E1213 08:47:57.218097 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:57.221574 containerd[1595]: time="2024-12-13T08:47:57.221198541Z" level=info msg="CreateContainer within sandbox \"0fb4b82eb30ba0377aa56af9a2097e86bf13edc6d47cd0da6d0d86bbf383d89d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 08:47:57.253726 containerd[1595]: time="2024-12-13T08:47:57.253635796Z" level=info msg="CreateContainer within sandbox \"59b46b64678db72fed92031cbd75af05325d77e3613682c40c3d4f9a55536762\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"da54347abddeb3666014c30de348416d703e6b95ebd023e91e22f16b02088f1a\"" Dec 13 08:47:57.255431 containerd[1595]: time="2024-12-13T08:47:57.255369681Z" level=info msg="StartContainer for \"da54347abddeb3666014c30de348416d703e6b95ebd023e91e22f16b02088f1a\"" Dec 13 08:47:57.280261 containerd[1595]: time="2024-12-13T08:47:57.280100833Z" level=info msg="CreateContainer within sandbox \"9af5fac4414947f9f01ba13b3cc3b0ef22fe46b5d02860d56d9ea4a15241d24e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bd1987ed24ba5b0af35f63408042fcbd64c43998a95ecdbfe3c9e5bae117d4b2\"" Dec 13 08:47:57.281672 containerd[1595]: 
time="2024-12-13T08:47:57.281625675Z" level=info msg="StartContainer for \"bd1987ed24ba5b0af35f63408042fcbd64c43998a95ecdbfe3c9e5bae117d4b2\"" Dec 13 08:47:57.306753 containerd[1595]: time="2024-12-13T08:47:57.304063667Z" level=info msg="CreateContainer within sandbox \"0fb4b82eb30ba0377aa56af9a2097e86bf13edc6d47cd0da6d0d86bbf383d89d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d05b0ea6a440b261596e7a617882ddffc06c992d777f7d764b085c0f2fbc6f7f\"" Dec 13 08:47:57.309886 containerd[1595]: time="2024-12-13T08:47:57.309828628Z" level=info msg="StartContainer for \"d05b0ea6a440b261596e7a617882ddffc06c992d777f7d764b085c0f2fbc6f7f\"" Dec 13 08:47:57.407575 containerd[1595]: time="2024-12-13T08:47:57.406487872Z" level=info msg="StartContainer for \"da54347abddeb3666014c30de348416d703e6b95ebd023e91e22f16b02088f1a\" returns successfully" Dec 13 08:47:57.417532 kubelet[2374]: E1213 08:47:57.417493 2374 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.59.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.59.17:6443: connect: connection refused Dec 13 08:47:57.469359 containerd[1595]: time="2024-12-13T08:47:57.469200812Z" level=info msg="StartContainer for \"bd1987ed24ba5b0af35f63408042fcbd64c43998a95ecdbfe3c9e5bae117d4b2\" returns successfully" Dec 13 08:47:57.479164 containerd[1595]: time="2024-12-13T08:47:57.478934772Z" level=info msg="StartContainer for \"d05b0ea6a440b261596e7a617882ddffc06c992d777f7d764b085c0f2fbc6f7f\" returns successfully" Dec 13 08:47:57.499621 kubelet[2374]: E1213 08:47:57.499115 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:58.507161 kubelet[2374]: E1213 08:47:58.507103 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:58.510201 kubelet[2374]: I1213 08:47:58.510163 2374 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:47:58.511267 kubelet[2374]: E1213 08:47:58.511243 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:58.512250 kubelet[2374]: E1213 08:47:58.512212 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:59.515347 kubelet[2374]: E1213 08:47:59.512771 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:59.516990 kubelet[2374]: E1213 08:47:59.516745 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:47:59.519690 kubelet[2374]: E1213 08:47:59.519567 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Dec 13 08:47:59.966347 kubelet[2374]: E1213 08:47:59.965047 2374 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-f-1ee231485e\" not found" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:00.069352 kubelet[2374]: I1213 08:48:00.068297 2374 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:00.366771 kubelet[2374]: I1213 08:48:00.366304 2374 apiserver.go:52] "Watching apiserver" Dec 13 08:48:00.398307 kubelet[2374]: I1213 08:48:00.398203 2374 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 08:48:03.895850 systemd[1]: Reloading requested from client PID 2643 ('systemctl') (unit session-7.scope)... Dec 13 08:48:03.895873 systemd[1]: Reloading... Dec 13 08:48:04.088398 zram_generator::config[2691]: No configuration found. Dec 13 08:48:04.325848 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:48:04.488715 systemd[1]: Reloading finished in 591 ms. Dec 13 08:48:04.546481 kubelet[2374]: I1213 08:48:04.546440 2374 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:48:04.547180 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:04.565546 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 08:48:04.566041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:04.578389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:04.777680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:04.791247 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 08:48:04.884426 kubelet[2743]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:04.884426 kubelet[2743]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 08:48:04.884426 kubelet[2743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:04.885011 kubelet[2743]: I1213 08:48:04.884445 2743 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:48:04.898006 kubelet[2743]: I1213 08:48:04.895949 2743 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 08:48:04.898006 kubelet[2743]: I1213 08:48:04.895995 2743 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:48:04.898006 kubelet[2743]: I1213 08:48:04.896439 2743 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 08:48:04.899898 kubelet[2743]: I1213 08:48:04.899857 2743 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 08:48:04.917215 kubelet[2743]: I1213 08:48:04.916352 2743 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:48:04.943536 kubelet[2743]: I1213 08:48:04.943113 2743 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 08:48:04.944402 kubelet[2743]: I1213 08:48:04.944365 2743 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:48:04.947507 kubelet[2743]: I1213 08:48:04.944858 2743 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:48:04.947507 kubelet[2743]: I1213 08:48:04.944922 2743 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:48:04.947507 kubelet[2743]: I1213 08:48:04.944940 2743 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 08:48:04.947507 kubelet[2743]: I1213 08:48:04.945009 2743 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:04.947507 kubelet[2743]: I1213 08:48:04.945170 2743 kubelet.go:396] "Attempting to sync node with API server" Dec 13 08:48:04.947507 kubelet[2743]: I1213 08:48:04.945306 2743 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:48:04.947507 kubelet[2743]: I1213 08:48:04.945490 2743 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:48:04.949582 kubelet[2743]: I1213 08:48:04.945513 2743 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:48:04.949582 kubelet[2743]: I1213 08:48:04.947897 2743 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:48:04.949582 kubelet[2743]: I1213 08:48:04.948374 2743 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:48:04.951747 kubelet[2743]: I1213 08:48:04.951022 2743 server.go:1256] "Started kubelet" Dec 13 08:48:04.959134 kubelet[2743]: I1213 08:48:04.959091 2743 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:48:04.980445 kubelet[2743]: I1213 08:48:04.976708 2743 
server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:48:04.982349 kubelet[2743]: I1213 08:48:04.980956 2743 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:48:04.987357 kubelet[2743]: I1213 08:48:04.984431 2743 server.go:461] "Adding debug handlers to kubelet server" Dec 13 08:48:05.015361 kubelet[2743]: I1213 08:48:04.984835 2743 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:48:05.015361 kubelet[2743]: I1213 08:48:04.989106 2743 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:48:05.015361 kubelet[2743]: I1213 08:48:04.989129 2743 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 08:48:05.015361 kubelet[2743]: I1213 08:48:05.015259 2743 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 08:48:05.015361 kubelet[2743]: I1213 08:48:05.006824 2743 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:48:05.016694 kubelet[2743]: I1213 08:48:05.016450 2743 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:48:05.017567 kubelet[2743]: E1213 08:48:05.008524 2743 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 08:48:05.022367 kubelet[2743]: I1213 08:48:05.021528 2743 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:48:05.029069 kubelet[2743]: I1213 08:48:05.029038 2743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:48:05.033592 kubelet[2743]: I1213 08:48:05.033562 2743 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 08:48:05.033769 kubelet[2743]: I1213 08:48:05.033760 2743 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:48:05.033879 kubelet[2743]: I1213 08:48:05.033871 2743 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 08:48:05.034002 kubelet[2743]: E1213 08:48:05.033993 2743 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:48:05.097237 kubelet[2743]: I1213 08:48:05.097001 2743 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.123813 kubelet[2743]: I1213 08:48:05.123759 2743 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.123978 kubelet[2743]: I1213 08:48:05.123891 2743 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.142529 kubelet[2743]: E1213 08:48:05.139853 2743 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 08:48:05.176376 kubelet[2743]: I1213 08:48:05.175738 2743 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:48:05.176376 kubelet[2743]: I1213 08:48:05.175777 2743 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:48:05.176376 kubelet[2743]: I1213 08:48:05.175807 2743 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:05.176376 kubelet[2743]: I1213 08:48:05.176044 2743 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 08:48:05.176376 kubelet[2743]: I1213 08:48:05.176081 2743 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 08:48:05.176376 kubelet[2743]: I1213 08:48:05.176092 2743 policy_none.go:49] "None policy: Start" Dec 13 08:48:05.178564 kubelet[2743]: I1213 08:48:05.177890 2743 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:48:05.178564 kubelet[2743]: I1213 08:48:05.177929 2743 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:48:05.178564 kubelet[2743]: I1213 08:48:05.178156 2743 state_mem.go:75] "Updated machine memory state" Dec 13 08:48:05.181251 kubelet[2743]: I1213 08:48:05.181214 2743 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:48:05.183547 kubelet[2743]: I1213 08:48:05.183510 2743 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:48:05.341035 kubelet[2743]: I1213 08:48:05.340275 2743 topology_manager.go:215] "Topology Admit Handler" podUID="a63674f1025984e5c6e2d12584db9983" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.341035 kubelet[2743]: I1213 08:48:05.340508 2743 topology_manager.go:215] "Topology Admit Handler" podUID="ff020e86a14abe7b4e6855da72a7f3b5" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.341035 kubelet[2743]: I1213 08:48:05.340571 2743 topology_manager.go:215] "Topology Admit Handler" podUID="412191306363a2de9b5a0940c1508afd" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.360386 kubelet[2743]: W1213 08:48:05.360006 2743 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:05.364953 kubelet[2743]: W1213 08:48:05.363918 2743 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:05.365244 kubelet[2743]: W1213 08:48:05.365056 2743 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:05.417045 kubelet[2743]: I1213 08:48:05.416480 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.417045 kubelet[2743]: I1213 08:48:05.416542 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.417045 kubelet[2743]: I1213 08:48:05.416769 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.417045 kubelet[2743]: I1213 08:48:05.416810 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/412191306363a2de9b5a0940c1508afd-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-f-1ee231485e\" (UID: \"412191306363a2de9b5a0940c1508afd\") " pod="kube-system/kube-scheduler-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.417045 kubelet[2743]: I1213 08:48:05.416843 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a63674f1025984e5c6e2d12584db9983-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-f-1ee231485e\" (UID: \"a63674f1025984e5c6e2d12584db9983\") " pod="kube-system/kube-apiserver-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.417403 kubelet[2743]: I1213 08:48:05.416875 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.417403 kubelet[2743]: I1213 08:48:05.416908 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff020e86a14abe7b4e6855da72a7f3b5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-f-1ee231485e\" (UID: \"ff020e86a14abe7b4e6855da72a7f3b5\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.417403 kubelet[2743]: I1213 08:48:05.416938 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/a63674f1025984e5c6e2d12584db9983-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-f-1ee231485e\" (UID: \"a63674f1025984e5c6e2d12584db9983\") " pod="kube-system/kube-apiserver-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.417403 kubelet[2743]: I1213 08:48:05.416975 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a63674f1025984e5c6e2d12584db9983-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-f-1ee231485e\" (UID: \"a63674f1025984e5c6e2d12584db9983\") " pod="kube-system/kube-apiserver-ci-4081.2.1-f-1ee231485e" Dec 13 08:48:05.665083 kubelet[2743]: E1213 08:48:05.664958 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:05.668342 kubelet[2743]: E1213 08:48:05.668168 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:05.668936 kubelet[2743]: E1213 08:48:05.668905 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:05.965635 kubelet[2743]: I1213 08:48:05.965461 2743 apiserver.go:52] "Watching apiserver" Dec 13 08:48:06.015774 kubelet[2743]: I1213 08:48:06.015725 2743 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 08:48:06.099600 kubelet[2743]: E1213 08:48:06.099259 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:06.102395 kubelet[2743]: E1213 08:48:06.101065 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:06.102395 kubelet[2743]: E1213 08:48:06.101931 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:06.150161 kubelet[2743]: I1213 08:48:06.149426 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-f-1ee231485e" podStartSLOduration=1.1492501179999999 podStartE2EDuration="1.149250118s" podCreationTimestamp="2024-12-13 08:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:06.148863431 +0000 UTC m=+1.349677938" watchObservedRunningTime="2024-12-13 08:48:06.149250118 +0000 UTC m=+1.350064625" Dec 13 08:48:06.196043 kubelet[2743]: I1213 08:48:06.195923 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-f-1ee231485e" podStartSLOduration=1.1957645829999999 podStartE2EDuration="1.195764583s" podCreationTimestamp="2024-12-13 08:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:06.175533598 +0000 UTC m=+1.376348105" watchObservedRunningTime="2024-12-13 
08:48:06.195764583 +0000 UTC m=+1.396579069" Dec 13 08:48:06.216854 kubelet[2743]: I1213 08:48:06.214185 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-f-1ee231485e" podStartSLOduration=1.214134452 podStartE2EDuration="1.214134452s" podCreationTimestamp="2024-12-13 08:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:06.198048059 +0000 UTC m=+1.398862560" watchObservedRunningTime="2024-12-13 08:48:06.214134452 +0000 UTC m=+1.414948959" Dec 13 08:48:07.102834 kubelet[2743]: E1213 08:48:07.102790 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:10.023553 update_engine[1578]: I20241213 08:48:10.023427 1578 update_attempter.cc:509] Updating boot flags... Dec 13 08:48:10.086400 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2805) Dec 13 08:48:10.203976 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2808) Dec 13 08:48:10.659576 kubelet[2743]: E1213 08:48:10.659491 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:11.124490 kubelet[2743]: E1213 08:48:11.124422 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:11.514120 sudo[1807]: pam_unix(sudo:session): session closed for user root Dec 13 08:48:11.520162 sshd[1803]: pam_unix(sshd:session): session closed for user core Dec 13 08:48:11.525791 systemd[1]: sshd@6-146.190.59.17:22-147.75.109.163:43888.service: Deactivated successfully. Dec 13 08:48:11.532680 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 08:48:11.535117 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit. Dec 13 08:48:11.537156 systemd-logind[1572]: Removed session 7. 
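The update_engine entry is Flatcar's A/B updater writing boot flags, i.e. marking the running /usr partition slot as successfully booted; that partition-table write is also the likely trigger for the udev re-scan behind the harmless BTRFS duplicate-device warnings that follow it. A quick, hedged way to inspect that state on a Flatcar node (output fields vary by version):

    update_engine_client -status
    # CURRENT_OP=UPDATE_STATUS_IDLE    (typical once boot flags are written)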
Dec 13 08:48:11.908727 kubelet[2743]: E1213 08:48:11.907702 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:12.125519 kubelet[2743]: E1213 08:48:12.125380 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:14.261466 kubelet[2743]: E1213 08:48:14.260874 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:15.135346 kubelet[2743]: E1213 08:48:15.132827 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:16.135250 kubelet[2743]: E1213 08:48:16.135210 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:16.911902 kubelet[2743]: I1213 08:48:16.910750 2743 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 08:48:16.913501 containerd[1595]: time="2024-12-13T08:48:16.913438244Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 08:48:16.917549 kubelet[2743]: I1213 08:48:16.915594 2743 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 08:48:17.711243 kubelet[2743]: I1213 08:48:17.710531 2743 topology_manager.go:215] "Topology Admit Handler" podUID="83155285-5495-4838-8645-05c859d29fa4" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-hlb2b"
Dec 13 08:48:17.762889 kubelet[2743]: I1213 08:48:17.762834 2743 topology_manager.go:215] "Topology Admit Handler" podUID="a26ae021-221c-4e5f-813d-85bc8f27cb13" podNamespace="kube-system" podName="kube-proxy-f8cp5"
Dec 13 08:48:17.811335 kubelet[2743]: I1213 08:48:17.811205 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45727\" (UniqueName: \"kubernetes.io/projected/83155285-5495-4838-8645-05c859d29fa4-kube-api-access-45727\") pod \"tigera-operator-c7ccbd65-hlb2b\" (UID: \"83155285-5495-4838-8645-05c859d29fa4\") " pod="tigera-operator/tigera-operator-c7ccbd65-hlb2b"
Dec 13 08:48:17.811335 kubelet[2743]: I1213 08:48:17.811272 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a26ae021-221c-4e5f-813d-85bc8f27cb13-lib-modules\") pod \"kube-proxy-f8cp5\" (UID: \"a26ae021-221c-4e5f-813d-85bc8f27cb13\") " pod="kube-system/kube-proxy-f8cp5"
Dec 13 08:48:17.811335 kubelet[2743]: I1213 08:48:17.811307 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a26ae021-221c-4e5f-813d-85bc8f27cb13-xtables-lock\") pod \"kube-proxy-f8cp5\" (UID: \"a26ae021-221c-4e5f-813d-85bc8f27cb13\") " pod="kube-system/kube-proxy-f8cp5"
Dec 13 08:48:17.811664 kubelet[2743]: I1213 08:48:17.811367 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83155285-5495-4838-8645-05c859d29fa4-var-lib-calico\") pod \"tigera-operator-c7ccbd65-hlb2b\" (UID: \"83155285-5495-4838-8645-05c859d29fa4\") " pod="tigera-operator/tigera-operator-c7ccbd65-hlb2b"
Dec 13 08:48:17.811664 kubelet[2743]: I1213 08:48:17.811418 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4qk\" (UniqueName: \"kubernetes.io/projected/a26ae021-221c-4e5f-813d-85bc8f27cb13-kube-api-access-bf4qk\") pod \"kube-proxy-f8cp5\" (UID: \"a26ae021-221c-4e5f-813d-85bc8f27cb13\") " pod="kube-system/kube-proxy-f8cp5"
Dec 13 08:48:17.811664 kubelet[2743]: I1213 08:48:17.811453 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a26ae021-221c-4e5f-813d-85bc8f27cb13-kube-proxy\") pod \"kube-proxy-f8cp5\" (UID: \"a26ae021-221c-4e5f-813d-85bc8f27cb13\") " pod="kube-system/kube-proxy-f8cp5"
Dec 13 08:48:18.026758 containerd[1595]: time="2024-12-13T08:48:18.025964317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-hlb2b,Uid:83155285-5495-4838-8645-05c859d29fa4,Namespace:tigera-operator,Attempt:0,}"
Dec 13 08:48:18.073298 kubelet[2743]: E1213 08:48:18.070909 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:18.073616 containerd[1595]: time="2024-12-13T08:48:18.073256226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8cp5,Uid:a26ae021-221c-4e5f-813d-85bc8f27cb13,Namespace:kube-system,Attempt:0,}"
Dec 13 08:48:18.090264 containerd[1595]: time="2024-12-13T08:48:18.089739992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 08:48:18.090264 containerd[1595]: time="2024-12-13T08:48:18.089835200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 08:48:18.090264 containerd[1595]: time="2024-12-13T08:48:18.089852410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:48:18.090646 containerd[1595]: time="2024-12-13T08:48:18.090410296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:48:18.152507 containerd[1595]: time="2024-12-13T08:48:18.151513480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 08:48:18.152925 containerd[1595]: time="2024-12-13T08:48:18.152535528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 08:48:18.152925 containerd[1595]: time="2024-12-13T08:48:18.152802096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:48:18.153414 containerd[1595]: time="2024-12-13T08:48:18.153228776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:48:18.217402 containerd[1595]: time="2024-12-13T08:48:18.217262879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-hlb2b,Uid:83155285-5495-4838-8645-05c859d29fa4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f183a7781c2668832025424096d92105f30cf758dcb5eef7c817883f75c094df\""
Dec 13 08:48:18.222753 containerd[1595]: time="2024-12-13T08:48:18.222452808Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 08:48:18.230444 containerd[1595]: time="2024-12-13T08:48:18.230153623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8cp5,Uid:a26ae021-221c-4e5f-813d-85bc8f27cb13,Namespace:kube-system,Attempt:0,} returns sandbox id \"e94226bb76b3d932112accc0a21b9f8e1905fff00faa2632f5595358e9bd3cc8\""
Dec 13 08:48:18.231421 kubelet[2743]: E1213 08:48:18.231395 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:18.235514 containerd[1595]: time="2024-12-13T08:48:18.235381111Z" level=info msg="CreateContainer within sandbox \"e94226bb76b3d932112accc0a21b9f8e1905fff00faa2632f5595358e9bd3cc8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 08:48:18.273145 containerd[1595]: time="2024-12-13T08:48:18.273030724Z" level=info msg="CreateContainer within sandbox \"e94226bb76b3d932112accc0a21b9f8e1905fff00faa2632f5595358e9bd3cc8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"25bef11f476b5f755ec71130b27f257a2c08d1e035744e0344094eae8209fc92\""
Dec 13 08:48:18.276482 containerd[1595]: time="2024-12-13T08:48:18.274281728Z" level=info msg="StartContainer for \"25bef11f476b5f755ec71130b27f257a2c08d1e035744e0344094eae8209fc92\""
Dec 13 08:48:18.364445 containerd[1595]: time="2024-12-13T08:48:18.363937676Z" level=info msg="StartContainer for \"25bef11f476b5f755ec71130b27f257a2c08d1e035744e0344094eae8209fc92\" returns successfully"
Dec 13 08:48:19.148429 kubelet[2743]: E1213 08:48:19.147472 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:19.194427 kubelet[2743]: I1213 08:48:19.194201 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f8cp5" podStartSLOduration=2.194139124 podStartE2EDuration="2.194139124s" podCreationTimestamp="2024-12-13 08:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:19.192098824 +0000 UTC m=+14.392913335" watchObservedRunningTime="2024-12-13 08:48:19.194139124 +0000 UTC m=+14.394953632"
Dec 13 08:48:20.057611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134619804.mount: Deactivated successfully.
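[Editor's note] The pod_startup_latency_tracker entries carry enough timestamps to check their own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). For kube-proxy-f8cp5 above the pull timestamps are the zero time, so both durations are 2.194139124s; the tigera-operator entry further below exercises the pull window. A small Go check using those logged values (the formula is inferred from the logged fields, not taken from kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse parses timestamps in the format kubelet logs them
// ("2024-12-13 08:48:17 +0000 UTC", with optional fractional seconds).
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps from the tigera-operator-c7ccbd65-hlb2b startup entry below.
	created := mustParse("2024-12-13 08:48:17 +0000 UTC")
	firstPull := mustParse("2024-12-13 08:48:18.219948455 +0000 UTC")
	lastPull := mustParse("2024-12-13 08:48:20.76442517 +0000 UTC")
	observed := mustParse("2024-12-13 08:48:24.291147392 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // same interval, excluding pull time

	fmt.Println("E2E:", e2e) // 7.291147392s, matching the logged value
	fmt.Println("SLO:", slo) // ~4.746670677s vs logged 4.746670655
}
```

The computed SLO duration lands within about 20ns of the logged 4.746670655, a gap consistent with kubelet doing the subtraction on monotonic-clock readings (the m=+ offsets) rather than on the wall-clock strings.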
Dec 13 08:48:20.731817 containerd[1595]: time="2024-12-13T08:48:20.731657559Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:20.735022 containerd[1595]: time="2024-12-13T08:48:20.734854663Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764297" Dec 13 08:48:20.751793 containerd[1595]: time="2024-12-13T08:48:20.751661498Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:20.759391 containerd[1595]: time="2024-12-13T08:48:20.757510701Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:20.759664 containerd[1595]: time="2024-12-13T08:48:20.759303028Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.536765549s" Dec 13 08:48:20.759808 containerd[1595]: time="2024-12-13T08:48:20.759780065Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 08:48:20.770513 containerd[1595]: time="2024-12-13T08:48:20.770453881Z" level=info msg="CreateContainer within sandbox \"f183a7781c2668832025424096d92105f30cf758dcb5eef7c817883f75c094df\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 08:48:20.806331 containerd[1595]: time="2024-12-13T08:48:20.806099869Z" level=info msg="CreateContainer within sandbox \"f183a7781c2668832025424096d92105f30cf758dcb5eef7c817883f75c094df\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bccfb29b58b8ec89f3d6dfd345774ecf8eb22375388b018740f218a69f87eaea\"" Dec 13 08:48:20.807561 containerd[1595]: time="2024-12-13T08:48:20.807500471Z" level=info msg="StartContainer for \"bccfb29b58b8ec89f3d6dfd345774ecf8eb22375388b018740f218a69f87eaea\"" Dec 13 08:48:20.919052 containerd[1595]: time="2024-12-13T08:48:20.918712130Z" level=info msg="StartContainer for \"bccfb29b58b8ec89f3d6dfd345774ecf8eb22375388b018740f218a69f87eaea\" returns successfully" Dec 13 08:48:24.292895 kubelet[2743]: I1213 08:48:24.291218 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-hlb2b" podStartSLOduration=4.746670655 podStartE2EDuration="7.291147392s" podCreationTimestamp="2024-12-13 08:48:17 +0000 UTC" firstStartedPulling="2024-12-13 08:48:18.219948455 +0000 UTC m=+13.420762940" lastFinishedPulling="2024-12-13 08:48:20.76442517 +0000 UTC m=+15.965239677" observedRunningTime="2024-12-13 08:48:21.175308981 +0000 UTC m=+16.376123488" watchObservedRunningTime="2024-12-13 08:48:24.291147392 +0000 UTC m=+19.491961899" Dec 13 08:48:24.292895 kubelet[2743]: I1213 08:48:24.291681 2743 topology_manager.go:215] "Topology Admit Handler" podUID="c0db4cbb-4c8f-41c0-a2d6-1d10fd9a3ee2" podNamespace="calico-system" podName="calico-typha-66996554b9-bpn5c" Dec 13 08:48:24.359431 kubelet[2743]: I1213 08:48:24.359059 2743 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwmw\" (UniqueName: \"kubernetes.io/projected/c0db4cbb-4c8f-41c0-a2d6-1d10fd9a3ee2-kube-api-access-6rwmw\") pod \"calico-typha-66996554b9-bpn5c\" (UID: \"c0db4cbb-4c8f-41c0-a2d6-1d10fd9a3ee2\") " pod="calico-system/calico-typha-66996554b9-bpn5c" Dec 13 08:48:24.359431 kubelet[2743]: I1213 08:48:24.359134 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c0db4cbb-4c8f-41c0-a2d6-1d10fd9a3ee2-typha-certs\") pod \"calico-typha-66996554b9-bpn5c\" (UID: \"c0db4cbb-4c8f-41c0-a2d6-1d10fd9a3ee2\") " pod="calico-system/calico-typha-66996554b9-bpn5c" Dec 13 08:48:24.359431 kubelet[2743]: I1213 08:48:24.359178 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0db4cbb-4c8f-41c0-a2d6-1d10fd9a3ee2-tigera-ca-bundle\") pod \"calico-typha-66996554b9-bpn5c\" (UID: \"c0db4cbb-4c8f-41c0-a2d6-1d10fd9a3ee2\") " pod="calico-system/calico-typha-66996554b9-bpn5c" Dec 13 08:48:24.532880 kubelet[2743]: I1213 08:48:24.532826 2743 topology_manager.go:215] "Topology Admit Handler" podUID="598e8726-686b-4daf-a927-5fa0dfcbee9c" podNamespace="calico-system" podName="calico-node-dmbrr" Dec 13 08:48:24.562611 kubelet[2743]: I1213 08:48:24.560879 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/598e8726-686b-4daf-a927-5fa0dfcbee9c-var-run-calico\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.562611 kubelet[2743]: I1213 08:48:24.560963 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/598e8726-686b-4daf-a927-5fa0dfcbee9c-var-lib-calico\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.562611 kubelet[2743]: I1213 08:48:24.561002 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/598e8726-686b-4daf-a927-5fa0dfcbee9c-policysync\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.562611 kubelet[2743]: I1213 08:48:24.561030 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/598e8726-686b-4daf-a927-5fa0dfcbee9c-cni-bin-dir\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.562611 kubelet[2743]: I1213 08:48:24.561069 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/598e8726-686b-4daf-a927-5fa0dfcbee9c-xtables-lock\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.562920 kubelet[2743]: I1213 08:48:24.561099 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/598e8726-686b-4daf-a927-5fa0dfcbee9c-lib-modules\") 
pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.562920 kubelet[2743]: I1213 08:48:24.561136 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/598e8726-686b-4daf-a927-5fa0dfcbee9c-cni-net-dir\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.599538 kubelet[2743]: I1213 08:48:24.561177 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvkj\" (UniqueName: \"kubernetes.io/projected/598e8726-686b-4daf-a927-5fa0dfcbee9c-kube-api-access-2rvkj\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.599538 kubelet[2743]: I1213 08:48:24.599347 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/598e8726-686b-4daf-a927-5fa0dfcbee9c-cni-log-dir\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.599538 kubelet[2743]: I1213 08:48:24.599379 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/598e8726-686b-4daf-a927-5fa0dfcbee9c-tigera-ca-bundle\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.599538 kubelet[2743]: I1213 08:48:24.599404 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/598e8726-686b-4daf-a927-5fa0dfcbee9c-node-certs\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.599538 kubelet[2743]: I1213 08:48:24.599436 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/598e8726-686b-4daf-a927-5fa0dfcbee9c-flexvol-driver-host\") pod \"calico-node-dmbrr\" (UID: \"598e8726-686b-4daf-a927-5fa0dfcbee9c\") " pod="calico-system/calico-node-dmbrr" Dec 13 08:48:24.614233 kubelet[2743]: E1213 08:48:24.614182 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:24.657567 containerd[1595]: time="2024-12-13T08:48:24.657505382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66996554b9-bpn5c,Uid:c0db4cbb-4c8f-41c0-a2d6-1d10fd9a3ee2,Namespace:calico-system,Attempt:0,}" Dec 13 08:48:24.711749 kubelet[2743]: E1213 08:48:24.711700 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.712137 kubelet[2743]: W1213 08:48:24.711954 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.712137 kubelet[2743]: E1213 08:48:24.711991 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.712516 kubelet[2743]: E1213 08:48:24.712406 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.712516 kubelet[2743]: W1213 08:48:24.712420 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.712516 kubelet[2743]: E1213 08:48:24.712436 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.712782 kubelet[2743]: E1213 08:48:24.712770 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.713029 kubelet[2743]: W1213 08:48:24.712852 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.713115 kubelet[2743]: E1213 08:48:24.713103 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.713367 kubelet[2743]: E1213 08:48:24.713244 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.713550 kubelet[2743]: W1213 08:48:24.713445 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.713550 kubelet[2743]: E1213 08:48:24.713479 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.714369 kubelet[2743]: E1213 08:48:24.713900 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.714369 kubelet[2743]: W1213 08:48:24.713916 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.714369 kubelet[2743]: E1213 08:48:24.713937 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.732352 kubelet[2743]: I1213 08:48:24.730334 2743 topology_manager.go:215] "Topology Admit Handler" podUID="b522e9d4-562e-4448-88f3-7d6870e65d2f" podNamespace="calico-system" podName="csi-node-driver-h2vzm" Dec 13 08:48:24.732352 kubelet[2743]: E1213 08:48:24.730643 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2vzm" podUID="b522e9d4-562e-4448-88f3-7d6870e65d2f" Dec 13 08:48:24.732352 kubelet[2743]: E1213 08:48:24.731427 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.732352 kubelet[2743]: W1213 08:48:24.731446 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.732352 kubelet[2743]: E1213 08:48:24.731472 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.759411 kubelet[2743]: E1213 08:48:24.758971 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.759411 kubelet[2743]: W1213 08:48:24.758996 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.759411 kubelet[2743]: E1213 08:48:24.759047 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.759411 kubelet[2743]: E1213 08:48:24.759283 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.759411 kubelet[2743]: W1213 08:48:24.759293 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.759411 kubelet[2743]: E1213 08:48:24.759340 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.760581 kubelet[2743]: E1213 08:48:24.760198 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.760581 kubelet[2743]: W1213 08:48:24.760214 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.760581 kubelet[2743]: E1213 08:48:24.760508 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.760581 kubelet[2743]: W1213 08:48:24.760521 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.760581 kubelet[2743]: E1213 08:48:24.760539 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.761676 kubelet[2743]: E1213 08:48:24.760878 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.761676 kubelet[2743]: E1213 08:48:24.761632 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.761925 kubelet[2743]: W1213 08:48:24.761648 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.761925 kubelet[2743]: E1213 08:48:24.761863 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.762499 kubelet[2743]: E1213 08:48:24.762394 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.762499 kubelet[2743]: W1213 08:48:24.762425 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.762499 kubelet[2743]: E1213 08:48:24.762447 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.762925 kubelet[2743]: E1213 08:48:24.762913 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.763011 kubelet[2743]: W1213 08:48:24.763001 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.763159 kubelet[2743]: E1213 08:48:24.763075 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.763393 kubelet[2743]: E1213 08:48:24.763373 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.763547 kubelet[2743]: W1213 08:48:24.763451 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.763547 kubelet[2743]: E1213 08:48:24.763485 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.763858 kubelet[2743]: E1213 08:48:24.763812 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.763858 kubelet[2743]: W1213 08:48:24.763824 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.763858 kubelet[2743]: E1213 08:48:24.763836 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.764214 kubelet[2743]: E1213 08:48:24.764195 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.764369 kubelet[2743]: W1213 08:48:24.764266 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.764369 kubelet[2743]: E1213 08:48:24.764301 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.764713 kubelet[2743]: E1213 08:48:24.764702 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.764974 kubelet[2743]: W1213 08:48:24.764903 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.764974 kubelet[2743]: E1213 08:48:24.764924 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.765354 kubelet[2743]: E1213 08:48:24.765218 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.765354 kubelet[2743]: W1213 08:48:24.765228 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.765354 kubelet[2743]: E1213 08:48:24.765258 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.765626 kubelet[2743]: E1213 08:48:24.765615 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.765686 kubelet[2743]: W1213 08:48:24.765677 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.765846 kubelet[2743]: E1213 08:48:24.765733 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.766042 kubelet[2743]: E1213 08:48:24.766031 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.766218 kubelet[2743]: W1213 08:48:24.766117 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.766218 kubelet[2743]: E1213 08:48:24.766138 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.766436 kubelet[2743]: E1213 08:48:24.766426 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.766574 kubelet[2743]: W1213 08:48:24.766492 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.766709 kubelet[2743]: E1213 08:48:24.766632 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.766861 kubelet[2743]: E1213 08:48:24.766851 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.767010 kubelet[2743]: W1213 08:48:24.766911 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.767010 kubelet[2743]: E1213 08:48:24.766944 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.767363 kubelet[2743]: E1213 08:48:24.767191 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.767363 kubelet[2743]: W1213 08:48:24.767216 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.767363 kubelet[2743]: E1213 08:48:24.767228 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.767631 kubelet[2743]: E1213 08:48:24.767620 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.767756 kubelet[2743]: W1213 08:48:24.767680 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.767756 kubelet[2743]: E1213 08:48:24.767696 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.767986 kubelet[2743]: E1213 08:48:24.767975 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.768120 kubelet[2743]: W1213 08:48:24.768041 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.768120 kubelet[2743]: E1213 08:48:24.768057 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.768397 kubelet[2743]: E1213 08:48:24.768293 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.768397 kubelet[2743]: W1213 08:48:24.768302 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.768397 kubelet[2743]: E1213 08:48:24.768332 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.768619 kubelet[2743]: E1213 08:48:24.768610 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.768750 kubelet[2743]: W1213 08:48:24.768667 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.768750 kubelet[2743]: E1213 08:48:24.768682 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.782746 containerd[1595]: time="2024-12-13T08:48:24.780001933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:24.782746 containerd[1595]: time="2024-12-13T08:48:24.780111359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:24.782746 containerd[1595]: time="2024-12-13T08:48:24.780137785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:24.782746 containerd[1595]: time="2024-12-13T08:48:24.780338197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:24.806364 kubelet[2743]: E1213 08:48:24.806127 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.806364 kubelet[2743]: W1213 08:48:24.806159 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.806364 kubelet[2743]: E1213 08:48:24.806200 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.806364 kubelet[2743]: I1213 08:48:24.806264 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b522e9d4-562e-4448-88f3-7d6870e65d2f-varrun\") pod \"csi-node-driver-h2vzm\" (UID: \"b522e9d4-562e-4448-88f3-7d6870e65d2f\") " pod="calico-system/csi-node-driver-h2vzm" Dec 13 08:48:24.807286 kubelet[2743]: E1213 08:48:24.806990 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.807286 kubelet[2743]: W1213 08:48:24.807045 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.807286 kubelet[2743]: E1213 08:48:24.807082 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.807286 kubelet[2743]: I1213 08:48:24.807215 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b522e9d4-562e-4448-88f3-7d6870e65d2f-socket-dir\") pod \"csi-node-driver-h2vzm\" (UID: \"b522e9d4-562e-4448-88f3-7d6870e65d2f\") " pod="calico-system/csi-node-driver-h2vzm" Dec 13 08:48:24.808140 kubelet[2743]: E1213 08:48:24.807894 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.808140 kubelet[2743]: W1213 08:48:24.807916 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.808140 kubelet[2743]: E1213 08:48:24.807947 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.809590 kubelet[2743]: E1213 08:48:24.809353 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.809590 kubelet[2743]: W1213 08:48:24.809369 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.809590 kubelet[2743]: E1213 08:48:24.809404 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.811218 kubelet[2743]: E1213 08:48:24.811195 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.811524 kubelet[2743]: W1213 08:48:24.811371 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.811524 kubelet[2743]: E1213 08:48:24.811409 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.812369 kubelet[2743]: I1213 08:48:24.811711 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b522e9d4-562e-4448-88f3-7d6870e65d2f-registration-dir\") pod \"csi-node-driver-h2vzm\" (UID: \"b522e9d4-562e-4448-88f3-7d6870e65d2f\") " pod="calico-system/csi-node-driver-h2vzm" Dec 13 08:48:24.814307 kubelet[2743]: E1213 08:48:24.812937 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.814307 kubelet[2743]: W1213 08:48:24.812962 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.814307 kubelet[2743]: E1213 08:48:24.812988 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.818666 kubelet[2743]: E1213 08:48:24.815374 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.818666 kubelet[2743]: W1213 08:48:24.816650 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.818666 kubelet[2743]: E1213 08:48:24.816690 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.818666 kubelet[2743]: I1213 08:48:24.816735 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b522e9d4-562e-4448-88f3-7d6870e65d2f-kubelet-dir\") pod \"csi-node-driver-h2vzm\" (UID: \"b522e9d4-562e-4448-88f3-7d6870e65d2f\") " pod="calico-system/csi-node-driver-h2vzm" Dec 13 08:48:24.820429 kubelet[2743]: E1213 08:48:24.819898 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.820429 kubelet[2743]: W1213 08:48:24.819925 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.820429 kubelet[2743]: E1213 08:48:24.819953 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.824236 kubelet[2743]: E1213 08:48:24.822775 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.824236 kubelet[2743]: W1213 08:48:24.822863 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.824236 kubelet[2743]: E1213 08:48:24.822900 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.825765 kubelet[2743]: E1213 08:48:24.825411 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.826181 kubelet[2743]: W1213 08:48:24.826065 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.833478 kubelet[2743]: E1213 08:48:24.827560 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.833478 kubelet[2743]: E1213 08:48:24.829732 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.833478 kubelet[2743]: W1213 08:48:24.829776 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.833478 kubelet[2743]: E1213 08:48:24.829814 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.833478 kubelet[2743]: E1213 08:48:24.831449 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.833478 kubelet[2743]: W1213 08:48:24.831466 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.833478 kubelet[2743]: E1213 08:48:24.831490 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.833478 kubelet[2743]: E1213 08:48:24.833386 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.833478 kubelet[2743]: W1213 08:48:24.833405 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.833478 kubelet[2743]: E1213 08:48:24.833432 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.837866 kubelet[2743]: I1213 08:48:24.834173 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msclt\" (UniqueName: \"kubernetes.io/projected/b522e9d4-562e-4448-88f3-7d6870e65d2f-kube-api-access-msclt\") pod \"csi-node-driver-h2vzm\" (UID: \"b522e9d4-562e-4448-88f3-7d6870e65d2f\") " pod="calico-system/csi-node-driver-h2vzm" Dec 13 08:48:24.840962 kubelet[2743]: E1213 08:48:24.840045 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.840962 kubelet[2743]: W1213 08:48:24.840076 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.840962 kubelet[2743]: E1213 08:48:24.840106 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.842546 kubelet[2743]: E1213 08:48:24.842509 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:24.843967 kubelet[2743]: E1213 08:48:24.843154 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.844546 kubelet[2743]: W1213 08:48:24.844141 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.844546 kubelet[2743]: E1213 08:48:24.844178 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.848464 containerd[1595]: time="2024-12-13T08:48:24.847648429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmbrr,Uid:598e8726-686b-4daf-a927-5fa0dfcbee9c,Namespace:calico-system,Attempt:0,}" Dec 13 08:48:24.940468 kubelet[2743]: E1213 08:48:24.939019 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.940468 kubelet[2743]: W1213 08:48:24.939100 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.940468 kubelet[2743]: E1213 08:48:24.939146 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.940468 kubelet[2743]: E1213 08:48:24.939976 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.940468 kubelet[2743]: W1213 08:48:24.939997 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.940468 kubelet[2743]: E1213 08:48:24.940060 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.940927 kubelet[2743]: E1213 08:48:24.940619 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.940927 kubelet[2743]: W1213 08:48:24.940631 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.940927 kubelet[2743]: E1213 08:48:24.940760 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.941126 kubelet[2743]: E1213 08:48:24.941082 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.941126 kubelet[2743]: W1213 08:48:24.941104 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.946308 kubelet[2743]: E1213 08:48:24.941375 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.946308 kubelet[2743]: E1213 08:48:24.941871 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.946308 kubelet[2743]: W1213 08:48:24.941883 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.946308 kubelet[2743]: E1213 08:48:24.941905 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.946308 kubelet[2743]: E1213 08:48:24.942415 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.946308 kubelet[2743]: W1213 08:48:24.942520 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.946308 kubelet[2743]: E1213 08:48:24.942545 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:24.946308 kubelet[2743]: E1213 08:48:24.944028 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.946308 kubelet[2743]: W1213 08:48:24.944043 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.946308 kubelet[2743]: E1213 08:48:24.944969 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.953379 kubelet[2743]: E1213 08:48:24.946080 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.953379 kubelet[2743]: W1213 08:48:24.946218 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.953379 kubelet[2743]: E1213 08:48:24.946495 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.953379 kubelet[2743]: E1213 08:48:24.947893 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.953379 kubelet[2743]: W1213 08:48:24.947918 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.955588 kubelet[2743]: E1213 08:48:24.954916 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.955588 kubelet[2743]: W1213 08:48:24.954966 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.959115 kubelet[2743]: E1213 08:48:24.958857 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.971003 kubelet[2743]: E1213 08:48:24.970481 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:24.971003 kubelet[2743]: E1213 08:48:24.970620 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:24.971003 kubelet[2743]: W1213 08:48:24.970634 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:24.978213 containerd[1595]: time="2024-12-13T08:48:24.977209011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
Dec 13 08:48:24.978213 containerd[1595]: time="2024-12-13T08:48:24.977209011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 08:48:24.978213 containerd[1595]: time="2024-12-13T08:48:24.978305869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 08:48:24.978665 containerd[1595]: time="2024-12-13T08:48:24.978380573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:48:24.979443 containerd[1595]: time="2024-12-13T08:48:24.978605187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:48:24.991374 kubelet[2743]: E1213 08:48:24.990429 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 08:48:24.994652 kubelet[2743]: E1213 08:48:24.994612 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 08:48:24.995092 kubelet[2743]: W1213 08:48:24.994743 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
[... the same FlexVolume init failure triplet repeats through Dec 13 08:48:25.110814 ...]
Dec 13 08:48:25.115444 containerd[1595]: time="2024-12-13T08:48:25.115372191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66996554b9-bpn5c,Uid:c0db4cbb-4c8f-41c0-a2d6-1d10fd9a3ee2,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a2f6a5281f74762b8c81bec96cfba36ebeb293914c2074be2932ee45128ce46\""
Dec 13 08:48:25.117076 kubelet[2743]: E1213 08:48:25.117012 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:25.126555 containerd[1595]: time="2024-12-13T08:48:25.126498504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 08:48:25.145607 containerd[1595]: time="2024-12-13T08:48:25.145516117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmbrr,Uid:598e8726-686b-4daf-a927-5fa0dfcbee9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"fabf26fe25a08d64f0bdb0eb6f91e1e5411b4bd285457edf50238268f39d5e9c\""
Dec 13 08:48:25.147662 kubelet[2743]: E1213 08:48:25.146719 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:26.821685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559021972.mount: Deactivated successfully.
Dec 13 08:48:26.853918 systemd-journald[1139]: Under memory pressure, flushing caches.
Dec 13 08:48:26.853206 systemd-resolved[1476]: Under memory pressure, flushing caches.
Dec 13 08:48:26.853752 systemd-resolved[1476]: Flushed all caches.
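The dns.go:153 lines are a separate, benign issue: kubelet honors at most three nameserver entries from resolv.conf (matching the classic glibc limit), and the applied line it reports, 67.207.67.3 67.207.67.2 67.207.67.3, even repeats the first server, so this droplet's /etc/resolv.conf evidently lists more entries than can be honored. A rough sketch of that cap (illustrative only, not kubelet's dns.go):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the limit behind kubelet's dns.go:153 warning.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Extras are dropped; kubelet logs the line it actually applied.
		fmt.Println("Nameserver limits exceeded, applied nameserver line is:",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```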
Dec 13 08:48:27.035436 kubelet[2743]: E1213 08:48:27.034944 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2vzm" podUID="b522e9d4-562e-4448-88f3-7d6870e65d2f"
Dec 13 08:48:27.753734 containerd[1595]: time="2024-12-13T08:48:27.753560884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:48:27.757138 containerd[1595]: time="2024-12-13T08:48:27.757029831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Dec 13 08:48:27.763050 containerd[1595]: time="2024-12-13T08:48:27.762922371Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:48:27.770438 containerd[1595]: time="2024-12-13T08:48:27.770272033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:48:27.773354 containerd[1595]: time="2024-12-13T08:48:27.771762949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.64488363s"
Dec 13 08:48:27.773354 containerd[1595]: time="2024-12-13T08:48:27.771832311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 08:48:27.775601 containerd[1595]: time="2024-12-13T08:48:27.775543525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 08:48:27.804878 containerd[1595]: time="2024-12-13T08:48:27.804838312Z" level=info msg="CreateContainer within sandbox \"3a2f6a5281f74762b8c81bec96cfba36ebeb293914c2074be2932ee45128ce46\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 08:48:27.837780 containerd[1595]: time="2024-12-13T08:48:27.837710498Z" level=info msg="CreateContainer within sandbox \"3a2f6a5281f74762b8c81bec96cfba36ebeb293914c2074be2932ee45128ce46\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4cbcad1c00288090c7887eaf5c64ae89cceab567c5826957ebd3929467c09c6b\""
Dec 13 08:48:27.840012 containerd[1595]: time="2024-12-13T08:48:27.839966425Z" level=info msg="StartContainer for \"4cbcad1c00288090c7887eaf5c64ae89cceab567c5826957ebd3929467c09c6b\""
Dec 13 08:48:27.978528 containerd[1595]: time="2024-12-13T08:48:27.978362775Z" level=info msg="StartContainer for \"4cbcad1c00288090c7887eaf5c64ae89cceab567c5826957ebd3929467c09c6b\" returns successfully"
Dec 13 08:48:28.212112 kubelet[2743]: E1213 08:48:28.211549 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:28.236344 kubelet[2743]: E1213 08:48:28.236154 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 08:48:28.236344 kubelet[2743]: W1213 08:48:28.236181 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 08:48:28.236344 kubelet[2743]: E1213 08:48:28.236210 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same FlexVolume init failure triplet repeats through Dec 13 08:48:28.360948 ...]
Dec 13 08:48:29.035635 kubelet[2743]: E1213 08:48:29.034757 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2vzm" podUID="b522e9d4-562e-4448-88f3-7d6870e65d2f"
Dec 13 08:48:29.213584 kubelet[2743]: I1213 08:48:29.213438 2743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 08:48:29.215065 kubelet[2743]: E1213 08:48:29.215037 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
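The pod_workers.go:1298 errors are kubelet refusing to sync csi-node-driver-h2vzm because the runtime still reports NetworkReady=false: Calico has not yet written a CNI network config, so the CNI plugin is uninitialized. Conceptually, the runtime's readiness test reduces to "is there a loadable network conf in /etc/cni/net.d yet"; a hypothetical helper showing that test (not the actual containerd code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cniReady reports whether a CNI network config exists yet; the runtime
// flips NetworkReady to true once a valid conf/conflist can be loaded.
func cniReady(confDir string) bool {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".conf") ||
			strings.HasSuffix(name, ".conflist") ||
			strings.HasSuffix(name, ".json") {
			return true
		}
	}
	return false
}

func main() {
	// Calico's install step writes e.g. 10-calico.conflist here.
	fmt.Println("NetworkReady:", cniReady("/etc/cni/net.d"))
}
```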
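Both image pulls in this log, calico/typha above and pod2daemon-flexvol below, follow the same CRI-driven shape: ImageCreate events for the tag, the image ID, and the repo digest; a "Pulled image ... in ..." summary; then CreateContainer inside the already-running pod sandbox and StartContainer. For orientation, the same flow sketched against containerd's Go client (kubelet actually drives this over the CRI gRPC API; the container and snapshot IDs here are made up):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: emits the ImageCreate events seen in the log.
	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/typha:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: returns a container id, as in the log.
	container, err := client.NewContainer(ctx, "calico-typha-demo",
		containerd.WithNewSnapshot("calico-typha-demo-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)))
	if err != nil {
		log.Fatal(err)
	}

	// StartContainer: create the task, then start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```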
Error: unexpected end of JSON input" Dec 13 08:48:29.261279 kubelet[2743]: E1213 08:48:29.261116 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.261279 kubelet[2743]: W1213 08:48:29.261133 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.261279 kubelet[2743]: E1213 08:48:29.261156 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.261846 kubelet[2743]: E1213 08:48:29.261705 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.261846 kubelet[2743]: W1213 08:48:29.261718 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.261846 kubelet[2743]: E1213 08:48:29.261737 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.262302 kubelet[2743]: E1213 08:48:29.262180 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.262302 kubelet[2743]: W1213 08:48:29.262196 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.262302 kubelet[2743]: E1213 08:48:29.262218 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.262910 kubelet[2743]: E1213 08:48:29.262783 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.262910 kubelet[2743]: W1213 08:48:29.262798 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.262910 kubelet[2743]: E1213 08:48:29.262816 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.263587 kubelet[2743]: E1213 08:48:29.263426 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.263587 kubelet[2743]: W1213 08:48:29.263440 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.263587 kubelet[2743]: E1213 08:48:29.263459 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:29.264041 kubelet[2743]: E1213 08:48:29.263859 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.264041 kubelet[2743]: W1213 08:48:29.263877 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.264041 kubelet[2743]: E1213 08:48:29.263902 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.264486 kubelet[2743]: E1213 08:48:29.264369 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.264486 kubelet[2743]: W1213 08:48:29.264387 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.264486 kubelet[2743]: E1213 08:48:29.264406 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.265059 kubelet[2743]: E1213 08:48:29.264888 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.265059 kubelet[2743]: W1213 08:48:29.264905 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.265059 kubelet[2743]: E1213 08:48:29.264924 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.265640 kubelet[2743]: E1213 08:48:29.265438 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.265640 kubelet[2743]: W1213 08:48:29.265457 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.265640 kubelet[2743]: E1213 08:48:29.265479 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.266279 kubelet[2743]: E1213 08:48:29.266174 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.266279 kubelet[2743]: W1213 08:48:29.266191 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.266279 kubelet[2743]: E1213 08:48:29.266210 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:48:29.267018 kubelet[2743]: E1213 08:48:29.266863 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.267018 kubelet[2743]: W1213 08:48:29.266878 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.267018 kubelet[2743]: E1213 08:48:29.266896 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.267456 kubelet[2743]: E1213 08:48:29.267259 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.267456 kubelet[2743]: W1213 08:48:29.267276 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.267456 kubelet[2743]: E1213 08:48:29.267299 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.267879 kubelet[2743]: E1213 08:48:29.267787 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.267879 kubelet[2743]: W1213 08:48:29.267799 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.267879 kubelet[2743]: E1213 08:48:29.267816 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.268247 kubelet[2743]: E1213 08:48:29.268167 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.268247 kubelet[2743]: W1213 08:48:29.268178 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.268247 kubelet[2743]: E1213 08:48:29.268190 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:48:29.360958 kubelet[2743]: E1213 08:48:29.360660 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:48:29.360958 kubelet[2743]: W1213 08:48:29.360684 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:48:29.360958 kubelet[2743]: E1213 08:48:29.360713 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 13 08:48:29.362748 kubelet[2743]: E1213 08:48:29.362600 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 08:48:29.362748 kubelet[2743]: W1213 08:48:29.362631 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 08:48:29.362748 kubelet[2743]: E1213 08:48:29.362664 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
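The triplet above is the kubelet's FlexVolume probe loop: for each directory under the plugin root it execs `<driver> init` and parses the JSON the driver prints to stdout; here the `uds` binary is absent, so the call yields empty output and the unmarshal fails with "unexpected end of JSON input". The same three lines repeat with only millisecond-apart timestamps for the rest of this probe cycle. A minimal, illustrative re-creation of the probe (the real logic lives in the kubelet's flexvolume package; this is a sketch, not the kubelet's code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is the JSON shape a FlexVolume driver must print to stdout,
// at minimum {"status":"Success"}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeInit(driver string) (*DriverStatus, error) {
	// The kubelet invokes "<driver> init" when it discovers a plugin dir.
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// A missing binary fails here; the kubelet reports it as
		// "executable file not found in $PATH".
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var status DriverStatus
	// With empty output this returns "unexpected end of JSON input",
	// the exact error repeated in the records above.
	if err := json.Unmarshal(out, &status); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: init: %w", err)
	}
	return &status, nil
}

func main() {
	st, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(st, err)
}
```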
Dec 13 08:48:29.374669 kubelet[2743]: E1213 08:48:29.374627 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 08:48:29.374806 kubelet[2743]: W1213 08:48:29.374701 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 08:48:29.379988 kubelet[2743]: E1213 08:48:29.379588 2743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 08:48:29.379988 kubelet[2743]: W1213 08:48:29.379613 2743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 08:48:29.379988 kubelet[2743]: E1213 08:48:29.379645 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 08:48:29.381605 kubelet[2743]: E1213 08:48:29.381528 2743 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 08:48:29.502661 containerd[1595]: time="2024-12-13T08:48:29.502532848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:48:29.507400 containerd[1595]: time="2024-12-13T08:48:29.506337745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Dec 13 08:48:29.509992 containerd[1595]: time="2024-12-13T08:48:29.509831318Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:48:29.518116 containerd[1595]: time="2024-12-13T08:48:29.516946028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:48:29.520592 containerd[1595]: time="2024-12-13T08:48:29.520431136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.74403344s"
Dec 13 08:48:29.520592 containerd[1595]: time="2024-12-13T08:48:29.520478226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Dec 13 08:48:29.526524 containerd[1595]: time="2024-12-13T08:48:29.525018325Z" level=info msg="CreateContainer within sandbox \"fabf26fe25a08d64f0bdb0eb6f91e1e5411b4bd285457edf50238268f39d5e9c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
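The ImageCreate / Pulled / CreateContainer records above are containerd's CRI-side flow for the flexvol-driver init container. Roughly the same pull-create-start-wait sequence can be driven by hand with containerd's Go client; a sketch, assuming a local containerd socket and using the image reference and container name from the log (IDs and snapshot name here are illustrative):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace,
	// matching the namespace=k8s.io fields in the shim records below.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "flexvol-driver",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("flexvol-driver-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask spawns the per-container shim; when this short-lived init
	// container exits, the shim is torn down ("cleaning up dead shim").
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil { // "StartContainer ... returns successfully"
		log.Fatal(err)
	}
	status := <-exitCh
	log.Printf("flexvol-driver exited with code %d", status.ExitCode())
}
```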
\"fabf26fe25a08d64f0bdb0eb6f91e1e5411b4bd285457edf50238268f39d5e9c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e1d5553d3ecf8b40bb71251b3d95e488a6dce4d6a7214196cca5bdd73025b4ea\"" Dec 13 08:48:29.577679 containerd[1595]: time="2024-12-13T08:48:29.576113142Z" level=info msg="StartContainer for \"e1d5553d3ecf8b40bb71251b3d95e488a6dce4d6a7214196cca5bdd73025b4ea\"" Dec 13 08:48:29.658395 systemd[1]: run-containerd-runc-k8s.io-e1d5553d3ecf8b40bb71251b3d95e488a6dce4d6a7214196cca5bdd73025b4ea-runc.0t78zF.mount: Deactivated successfully. Dec 13 08:48:29.707803 containerd[1595]: time="2024-12-13T08:48:29.707101822Z" level=info msg="StartContainer for \"e1d5553d3ecf8b40bb71251b3d95e488a6dce4d6a7214196cca5bdd73025b4ea\" returns successfully" Dec 13 08:48:29.791840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1d5553d3ecf8b40bb71251b3d95e488a6dce4d6a7214196cca5bdd73025b4ea-rootfs.mount: Deactivated successfully. Dec 13 08:48:29.812211 containerd[1595]: time="2024-12-13T08:48:29.784468167Z" level=info msg="shim disconnected" id=e1d5553d3ecf8b40bb71251b3d95e488a6dce4d6a7214196cca5bdd73025b4ea namespace=k8s.io Dec 13 08:48:29.812211 containerd[1595]: time="2024-12-13T08:48:29.811631488Z" level=warning msg="cleaning up after shim disconnected" id=e1d5553d3ecf8b40bb71251b3d95e488a6dce4d6a7214196cca5bdd73025b4ea namespace=k8s.io Dec 13 08:48:29.812211 containerd[1595]: time="2024-12-13T08:48:29.811653751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:48:30.222498 kubelet[2743]: E1213 08:48:30.222418 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:30.226998 containerd[1595]: time="2024-12-13T08:48:30.226742194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 08:48:30.259069 kubelet[2743]: I1213 08:48:30.258805 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-66996554b9-bpn5c" podStartSLOduration=3.607571148 podStartE2EDuration="6.256272278s" podCreationTimestamp="2024-12-13 08:48:24 +0000 UTC" firstStartedPulling="2024-12-13 08:48:25.123817365 +0000 UTC m=+20.324631852" lastFinishedPulling="2024-12-13 08:48:27.772518473 +0000 UTC m=+22.973332982" observedRunningTime="2024-12-13 08:48:28.246676059 +0000 UTC m=+23.447490567" watchObservedRunningTime="2024-12-13 08:48:30.256272278 +0000 UTC m=+25.457086821" Dec 13 08:48:31.035153 kubelet[2743]: E1213 08:48:31.034592 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2vzm" podUID="b522e9d4-562e-4448-88f3-7d6870e65d2f" Dec 13 08:48:33.036820 kubelet[2743]: E1213 08:48:33.036769 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2vzm" podUID="b522e9d4-562e-4448-88f3-7d6870e65d2f" Dec 13 08:48:35.035862 kubelet[2743]: E1213 08:48:35.035817 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
Dec 13 08:48:35.035862 kubelet[2743]: E1213 08:48:35.035817 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2vzm" podUID="b522e9d4-562e-4448-88f3-7d6870e65d2f"
Dec 13 08:48:35.268518 containerd[1595]: time="2024-12-13T08:48:35.268446996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:48:35.273339 containerd[1595]: time="2024-12-13T08:48:35.273092384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 13 08:48:35.275475 containerd[1595]: time="2024-12-13T08:48:35.275375543Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:48:35.282582 containerd[1595]: time="2024-12-13T08:48:35.281939892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:48:35.283797 containerd[1595]: time="2024-12-13T08:48:35.283728235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.05692356s"
Dec 13 08:48:35.283952 containerd[1595]: time="2024-12-13T08:48:35.283798646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 08:48:35.290996 containerd[1595]: time="2024-12-13T08:48:35.290416079Z" level=info msg="CreateContainer within sandbox \"fabf26fe25a08d64f0bdb0eb6f91e1e5411b4bd285457edf50238268f39d5e9c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 08:48:35.328973 containerd[1595]: time="2024-12-13T08:48:35.328510019Z" level=info msg="CreateContainer within sandbox \"fabf26fe25a08d64f0bdb0eb6f91e1e5411b4bd285457edf50238268f39d5e9c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5a48b14d6ac41cfc3e04a39128bc0dbd50b177442bdb1333d5ed05d1a1fe32d8\""
Dec 13 08:48:35.329928 containerd[1595]: time="2024-12-13T08:48:35.329822514Z" level=info msg="StartContainer for \"5a48b14d6ac41cfc3e04a39128bc0dbd50b177442bdb1333d5ed05d1a1fe32d8\""
Dec 13 08:48:35.543284 containerd[1595]: time="2024-12-13T08:48:35.543112053Z" level=info msg="StartContainer for \"5a48b14d6ac41cfc3e04a39128bc0dbd50b177442bdb1333d5ed05d1a1fe32d8\" returns successfully"
Dec 13 08:48:36.250169 kubelet[2743]: E1213 08:48:36.250089 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:36.294606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a48b14d6ac41cfc3e04a39128bc0dbd50b177442bdb1333d5ed05d1a1fe32d8-rootfs.mount: Deactivated successfully.
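The recurring "Nameserver limits exceeded" events come from the kubelet capping a pod's resolv.conf at three nameservers (the classic glibc limit); the host's resolv.conf evidently lists more, and the applied line even carries a duplicate (67.207.67.3 appears twice). A sketch of that truncation, assuming a standard resolv.conf layout (this is an illustration of the behavior, not the kubelet's code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the limit the kubelet enforces per pod

// appliedNameservers collects "nameserver" entries and drops everything
// past the limit, which is the case the log event reports.
func appliedNameservers(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, err
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // "some nameservers have been omitted"
	}
	return servers, nil
}

func main() {
	s, err := appliedNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("applied nameserver line:", strings.Join(s, " "))
}
```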
Dec 13 08:48:36.297911 containerd[1595]: time="2024-12-13T08:48:36.297815145Z" level=info msg="shim disconnected" id=5a48b14d6ac41cfc3e04a39128bc0dbd50b177442bdb1333d5ed05d1a1fe32d8 namespace=k8s.io
Dec 13 08:48:36.297911 containerd[1595]: time="2024-12-13T08:48:36.297894022Z" level=warning msg="cleaning up after shim disconnected" id=5a48b14d6ac41cfc3e04a39128bc0dbd50b177442bdb1333d5ed05d1a1fe32d8 namespace=k8s.io
Dec 13 08:48:36.297911 containerd[1595]: time="2024-12-13T08:48:36.297906929Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 08:48:36.310685 kubelet[2743]: I1213 08:48:36.310079 2743 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 08:48:36.385623 kubelet[2743]: I1213 08:48:36.385535 2743 topology_manager.go:215] "Topology Admit Handler" podUID="8474ed1a-b932-4bf2-81e9-841701b51857" podNamespace="kube-system" podName="coredns-76f75df574-mncf7"
Dec 13 08:48:36.405426 kubelet[2743]: I1213 08:48:36.400383 2743 topology_manager.go:215] "Topology Admit Handler" podUID="22a25218-506a-48e3-b4fd-f0ae3f1527a3" podNamespace="kube-system" podName="coredns-76f75df574-dnl5l"
Dec 13 08:48:36.410236 kubelet[2743]: I1213 08:48:36.409737 2743 topology_manager.go:215] "Topology Admit Handler" podUID="15f81aca-82d2-4a86-bb58-7838817f9d2a" podNamespace="calico-apiserver" podName="calico-apiserver-96bd6dbc8-jm4wj"
Dec 13 08:48:36.412409 kubelet[2743]: I1213 08:48:36.412229 2743 topology_manager.go:215] "Topology Admit Handler" podUID="ad7eff51-0c89-4d1a-be7b-2c099a9c4335" podNamespace="calico-system" podName="calico-kube-controllers-57f546d8b9-6fx86"
Dec 13 08:48:36.415023 kubelet[2743]: I1213 08:48:36.414840 2743 topology_manager.go:215] "Topology Admit Handler" podUID="85ce8df6-d384-49f2-95e5-eb36247ebf47" podNamespace="calico-apiserver" podName="calico-apiserver-96bd6dbc8-gtk2k"
Dec 13 08:48:36.424268 kubelet[2743]: I1213 08:48:36.424029 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/85ce8df6-d384-49f2-95e5-eb36247ebf47-calico-apiserver-certs\") pod \"calico-apiserver-96bd6dbc8-gtk2k\" (UID: \"85ce8df6-d384-49f2-95e5-eb36247ebf47\") " pod="calico-apiserver/calico-apiserver-96bd6dbc8-gtk2k"
Dec 13 08:48:36.424643 kubelet[2743]: I1213 08:48:36.424468 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbzl\" (UniqueName: \"kubernetes.io/projected/85ce8df6-d384-49f2-95e5-eb36247ebf47-kube-api-access-zcbzl\") pod \"calico-apiserver-96bd6dbc8-gtk2k\" (UID: \"85ce8df6-d384-49f2-95e5-eb36247ebf47\") " pod="calico-apiserver/calico-apiserver-96bd6dbc8-gtk2k"
Dec 13 08:48:36.425558 kubelet[2743]: I1213 08:48:36.425435 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22a25218-506a-48e3-b4fd-f0ae3f1527a3-config-volume\") pod \"coredns-76f75df574-dnl5l\" (UID: \"22a25218-506a-48e3-b4fd-f0ae3f1527a3\") " pod="kube-system/coredns-76f75df574-dnl5l"
Dec 13 08:48:36.425558 kubelet[2743]: I1213 08:48:36.425527 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15f81aca-82d2-4a86-bb58-7838817f9d2a-calico-apiserver-certs\") pod \"calico-apiserver-96bd6dbc8-jm4wj\" (UID: \"15f81aca-82d2-4a86-bb58-7838817f9d2a\") " pod="calico-apiserver/calico-apiserver-96bd6dbc8-jm4wj"
Dec 13 08:48:36.426106 kubelet[2743]: I1213 08:48:36.425818 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8474ed1a-b932-4bf2-81e9-841701b51857-config-volume\") pod \"coredns-76f75df574-mncf7\" (UID: \"8474ed1a-b932-4bf2-81e9-841701b51857\") " pod="kube-system/coredns-76f75df574-mncf7"
Dec 13 08:48:36.426106 kubelet[2743]: I1213 08:48:36.426046 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svmkv\" (UniqueName: \"kubernetes.io/projected/8474ed1a-b932-4bf2-81e9-841701b51857-kube-api-access-svmkv\") pod \"coredns-76f75df574-mncf7\" (UID: \"8474ed1a-b932-4bf2-81e9-841701b51857\") " pod="kube-system/coredns-76f75df574-mncf7"
Dec 13 08:48:36.426408 kubelet[2743]: I1213 08:48:36.426296 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4g8\" (UniqueName: \"kubernetes.io/projected/22a25218-506a-48e3-b4fd-f0ae3f1527a3-kube-api-access-sr4g8\") pod \"coredns-76f75df574-dnl5l\" (UID: \"22a25218-506a-48e3-b4fd-f0ae3f1527a3\") " pod="kube-system/coredns-76f75df574-dnl5l"
Dec 13 08:48:36.426408 kubelet[2743]: I1213 08:48:36.426386 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad7eff51-0c89-4d1a-be7b-2c099a9c4335-tigera-ca-bundle\") pod \"calico-kube-controllers-57f546d8b9-6fx86\" (UID: \"ad7eff51-0c89-4d1a-be7b-2c099a9c4335\") " pod="calico-system/calico-kube-controllers-57f546d8b9-6fx86"
Dec 13 08:48:36.426767 kubelet[2743]: I1213 08:48:36.426530 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd4m4\" (UniqueName: \"kubernetes.io/projected/15f81aca-82d2-4a86-bb58-7838817f9d2a-kube-api-access-cd4m4\") pod \"calico-apiserver-96bd6dbc8-jm4wj\" (UID: \"15f81aca-82d2-4a86-bb58-7838817f9d2a\") " pod="calico-apiserver/calico-apiserver-96bd6dbc8-jm4wj"
Dec 13 08:48:36.426767 kubelet[2743]: I1213 08:48:36.426694 2743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzn2n\" (UniqueName: \"kubernetes.io/projected/ad7eff51-0c89-4d1a-be7b-2c099a9c4335-kube-api-access-zzn2n\") pod \"calico-kube-controllers-57f546d8b9-6fx86\" (UID: \"ad7eff51-0c89-4d1a-be7b-2c099a9c4335\") " pod="calico-system/calico-kube-controllers-57f546d8b9-6fx86"
Dec 13 08:48:36.695759 kubelet[2743]: E1213 08:48:36.695598 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:36.701174 containerd[1595]: time="2024-12-13T08:48:36.700774673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mncf7,Uid:8474ed1a-b932-4bf2-81e9-841701b51857,Namespace:kube-system,Attempt:0,}"
Dec 13 08:48:36.722274 kubelet[2743]: E1213 08:48:36.721438 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:36.723673 containerd[1595]: time="2024-12-13T08:48:36.723625375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dnl5l,Uid:22a25218-506a-48e3-b4fd-f0ae3f1527a3,Namespace:kube-system,Attempt:0,}"
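Each VerifyControllerAttachedVolume record above maps to a volume declared in a pod spec: kubernetes.io/configmap for the coredns config, kubernetes.io/secret for the apiserver certs, and kubernetes.io/projected for the kube-api-access-* service-account tokens. A sketch of the shapes involved using the k8s.io/api types (volume names taken from the log; the referenced ConfigMap name is an assumption):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func corednsVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			// "config-volume" backed by a ConfigMap, as in the
			// kubernetes.io/configmap/... records above.
			Name: "config-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
				},
			},
		},
		{
			// "kube-api-access-*" service-account tokens are projected
			// volumes (kubernetes.io/projected/... in the log).
			Name: "kube-api-access-svmkv",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
	}
}

func main() {
	for _, v := range corednsVolumes() {
		fmt.Println(v.Name)
	}
}
```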
containerd[1595]: time="2024-12-13T08:48:36.736267932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96bd6dbc8-gtk2k,Uid:85ce8df6-d384-49f2-95e5-eb36247ebf47,Namespace:calico-apiserver,Attempt:0,}" Dec 13 08:48:36.738373 containerd[1595]: time="2024-12-13T08:48:36.738306583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96bd6dbc8-jm4wj,Uid:15f81aca-82d2-4a86-bb58-7838817f9d2a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 08:48:36.739338 containerd[1595]: time="2024-12-13T08:48:36.739146161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f546d8b9-6fx86,Uid:ad7eff51-0c89-4d1a-be7b-2c099a9c4335,Namespace:calico-system,Attempt:0,}" Dec 13 08:48:36.904390 systemd-journald[1139]: Under memory pressure, flushing caches. Dec 13 08:48:36.900565 systemd-resolved[1476]: Under memory pressure, flushing caches. Dec 13 08:48:36.900641 systemd-resolved[1476]: Flushed all caches. Dec 13 08:48:37.040698 containerd[1595]: time="2024-12-13T08:48:37.040594646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2vzm,Uid:b522e9d4-562e-4448-88f3-7d6870e65d2f,Namespace:calico-system,Attempt:0,}" Dec 13 08:48:37.184645 containerd[1595]: time="2024-12-13T08:48:37.184035340Z" level=error msg="Failed to destroy network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.191201 containerd[1595]: time="2024-12-13T08:48:37.190757113Z" level=error msg="Failed to destroy network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.194356 containerd[1595]: time="2024-12-13T08:48:37.194269509Z" level=error msg="encountered an error cleaning up failed sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.195657 containerd[1595]: time="2024-12-13T08:48:37.195572750Z" level=error msg="encountered an error cleaning up failed sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.204414 containerd[1595]: time="2024-12-13T08:48:37.203180909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f546d8b9-6fx86,Uid:ad7eff51-0c89-4d1a-be7b-2c099a9c4335,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.215993 containerd[1595]: time="2024-12-13T08:48:37.214605446Z" level=error msg="Failed to 
destroy network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.215993 containerd[1595]: time="2024-12-13T08:48:37.215041532Z" level=error msg="encountered an error cleaning up failed sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.215993 containerd[1595]: time="2024-12-13T08:48:37.215118368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mncf7,Uid:8474ed1a-b932-4bf2-81e9-841701b51857,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.215993 containerd[1595]: time="2024-12-13T08:48:37.215282223Z" level=error msg="Failed to destroy network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.215993 containerd[1595]: time="2024-12-13T08:48:37.215693714Z" level=error msg="encountered an error cleaning up failed sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.215993 containerd[1595]: time="2024-12-13T08:48:37.215751188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dnl5l,Uid:22a25218-506a-48e3-b4fd-f0ae3f1527a3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.227455 containerd[1595]: time="2024-12-13T08:48:37.227382126Z" level=error msg="Failed to destroy network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.227874 containerd[1595]: time="2024-12-13T08:48:37.227782925Z" level=error msg="encountered an error cleaning up failed sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.228004 containerd[1595]: 
time="2024-12-13T08:48:37.227900392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96bd6dbc8-jm4wj,Uid:15f81aca-82d2-4a86-bb58-7838817f9d2a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.228533 kubelet[2743]: E1213 08:48:37.228340 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.228533 kubelet[2743]: E1213 08:48:37.228402 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.228533 kubelet[2743]: E1213 08:48:37.228426 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57f546d8b9-6fx86" Dec 13 08:48:37.228533 kubelet[2743]: E1213 08:48:37.228458 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mncf7" Dec 13 08:48:37.228862 kubelet[2743]: E1213 08:48:37.228482 2743 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57f546d8b9-6fx86" Dec 13 08:48:37.228862 kubelet[2743]: E1213 08:48:37.228499 2743 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mncf7" Dec 13 08:48:37.228862 kubelet[2743]: E1213 08:48:37.228587 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-57f546d8b9-6fx86_calico-system(ad7eff51-0c89-4d1a-be7b-2c099a9c4335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57f546d8b9-6fx86_calico-system(ad7eff51-0c89-4d1a-be7b-2c099a9c4335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57f546d8b9-6fx86" podUID="ad7eff51-0c89-4d1a-be7b-2c099a9c4335" Dec 13 08:48:37.229720 kubelet[2743]: E1213 08:48:37.229286 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mncf7_kube-system(8474ed1a-b932-4bf2-81e9-841701b51857)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mncf7_kube-system(8474ed1a-b932-4bf2-81e9-841701b51857)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mncf7" podUID="8474ed1a-b932-4bf2-81e9-841701b51857" Dec 13 08:48:37.229720 kubelet[2743]: E1213 08:48:37.229391 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.229720 kubelet[2743]: E1213 08:48:37.229446 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dnl5l" Dec 13 08:48:37.230020 kubelet[2743]: E1213 08:48:37.229476 2743 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dnl5l" Dec 13 08:48:37.230020 kubelet[2743]: E1213 08:48:37.229537 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dnl5l_kube-system(22a25218-506a-48e3-b4fd-f0ae3f1527a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dnl5l_kube-system(22a25218-506a-48e3-b4fd-f0ae3f1527a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dnl5l" podUID="22a25218-506a-48e3-b4fd-f0ae3f1527a3" Dec 13 08:48:37.230020 kubelet[2743]: E1213 08:48:37.228346 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.230234 kubelet[2743]: E1213 08:48:37.229593 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96bd6dbc8-jm4wj" Dec 13 08:48:37.230234 kubelet[2743]: E1213 08:48:37.229615 2743 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96bd6dbc8-jm4wj" Dec 13 08:48:37.230234 kubelet[2743]: E1213 08:48:37.229668 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96bd6dbc8-jm4wj_calico-apiserver(15f81aca-82d2-4a86-bb58-7838817f9d2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96bd6dbc8-jm4wj_calico-apiserver(15f81aca-82d2-4a86-bb58-7838817f9d2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96bd6dbc8-jm4wj" podUID="15f81aca-82d2-4a86-bb58-7838817f9d2a" Dec 13 08:48:37.249804 containerd[1595]: time="2024-12-13T08:48:37.249714831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96bd6dbc8-gtk2k,Uid:85ce8df6-d384-49f2-95e5-eb36247ebf47,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.250764 kubelet[2743]: E1213 08:48:37.250685 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:37.250764 kubelet[2743]: E1213 
Dec 13 08:48:37.250764 kubelet[2743]: E1213 08:48:37.250754 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96bd6dbc8-gtk2k"
Dec 13 08:48:37.253210 kubelet[2743]: E1213 08:48:37.250786 2743 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96bd6dbc8-gtk2k"
Dec 13 08:48:37.253210 kubelet[2743]: E1213 08:48:37.252497 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96bd6dbc8-gtk2k_calico-apiserver(85ce8df6-d384-49f2-95e5-eb36247ebf47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96bd6dbc8-gtk2k_calico-apiserver(85ce8df6-d384-49f2-95e5-eb36247ebf47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96bd6dbc8-gtk2k" podUID="85ce8df6-d384-49f2-95e5-eb36247ebf47"
Dec 13 08:48:37.258804 kubelet[2743]: I1213 08:48:37.258237 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d"
Dec 13 08:48:37.268527 kubelet[2743]: E1213 08:48:37.266564 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:48:37.284118 containerd[1595]: time="2024-12-13T08:48:37.283388826Z" level=info msg="StopPodSandbox for \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\""
Dec 13 08:48:37.288553 containerd[1595]: time="2024-12-13T08:48:37.288498761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 08:48:37.320387 containerd[1595]: time="2024-12-13T08:48:37.318631830Z" level=info msg="Ensure that sandbox f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d in task-service has been cleanup successfully"
Dec 13 08:48:37.327941 kubelet[2743]: I1213 08:48:37.324010 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6"
Dec 13 08:48:37.330250 containerd[1595]: time="2024-12-13T08:48:37.330079013Z" level=info msg="StopPodSandbox for \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\""
Dec 13 08:48:37.336505 containerd[1595]: time="2024-12-13T08:48:37.335589222Z" level=info msg="Ensure that sandbox e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6 in task-service has been cleanup successfully"
Dec 13 08:48:37.340890 containerd[1595]: time="2024-12-13T08:48:37.340659952Z" level=error msg="Failed to destroy network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 08:48:37.343187 containerd[1595]: time="2024-12-13T08:48:37.343108331Z" level=error msg="encountered an error cleaning up failed sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 08:48:37.343669 containerd[1595]: time="2024-12-13T08:48:37.343525208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2vzm,Uid:b522e9d4-562e-4448-88f3-7d6870e65d2f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 08:48:37.344837 kubelet[2743]: E1213 08:48:37.344809 2743 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 08:48:37.345222 kubelet[2743]: E1213 08:48:37.345056 2743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h2vzm"
Dec 13 08:48:37.345222 kubelet[2743]: E1213 08:48:37.345101 2743 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h2vzm"
Dec 13 08:48:37.346535 kubelet[2743]: E1213 08:48:37.345416 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h2vzm_calico-system(b522e9d4-562e-4448-88f3-7d6870e65d2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h2vzm_calico-system(b522e9d4-562e-4448-88f3-7d6870e65d2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h2vzm" podUID="b522e9d4-562e-4448-88f3-7d6870e65d2f"
Dec 13 08:48:37.348448 kubelet[2743]: I1213 08:48:37.347599 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa"
Dec 13 08:48:37.351425 containerd[1595]: time="2024-12-13T08:48:37.351173936Z" level=info msg="StopPodSandbox for \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\""
Dec 13 08:48:37.352758 containerd[1595]: time="2024-12-13T08:48:37.351844270Z" level=info msg="Ensure that sandbox 141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa in task-service has been cleanup successfully"
Dec 13 08:48:37.355570 kubelet[2743]: I1213 08:48:37.355128 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994"
Dec 13 08:48:37.356340 containerd[1595]: time="2024-12-13T08:48:37.356283082Z" level=info msg="StopPodSandbox for \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\""
Dec 13 08:48:37.358635 containerd[1595]: time="2024-12-13T08:48:37.358154332Z" level=info msg="Ensure that sandbox 88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994 in task-service has been cleanup successfully"
Dec 13 08:48:37.362338 kubelet[2743]: I1213 08:48:37.362167 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490"
Dec 13 08:48:37.363423 containerd[1595]: time="2024-12-13T08:48:37.363290820Z" level=info msg="StopPodSandbox for \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\""
Dec 13 08:48:37.363598 containerd[1595]: time="2024-12-13T08:48:37.363565693Z" level=info msg="Ensure that sandbox bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490 in task-service has been cleanup successfully"
Dec 13 08:48:37.446089 containerd[1595]: time="2024-12-13T08:48:37.445998337Z" level=error msg="StopPodSandbox for \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\" failed" error="failed to destroy network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 08:48:37.447204 kubelet[2743]: E1213 08:48:37.446273 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d"
Dec 13 08:48:37.447204 kubelet[2743]: E1213 08:48:37.446410 2743 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d"}
Dec 13 08:48:37.447204 kubelet[2743]: E1213 08:48:37.446457 2743 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85ce8df6-d384-49f2-95e5-eb36247ebf47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 08:48:37.447204 kubelet[2743]: E1213 08:48:37.446511 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85ce8df6-d384-49f2-95e5-eb36247ebf47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96bd6dbc8-gtk2k" podUID="85ce8df6-d384-49f2-95e5-eb36247ebf47"
Dec 13 08:48:37.467023 containerd[1595]: time="2024-12-13T08:48:37.466960137Z" level=error msg="StopPodSandbox for \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\" failed" error="failed to destroy network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 08:48:37.467689 kubelet[2743]: E1213 08:48:37.467656 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6"
Dec 13 08:48:37.468578 kubelet[2743]: E1213 08:48:37.468401 2743 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6"}
Dec 13 08:48:37.468578 kubelet[2743]: E1213 08:48:37.468486 2743 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22a25218-506a-48e3-b4fd-f0ae3f1527a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 08:48:37.468578 kubelet[2743]: E1213 08:48:37.468547 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22a25218-506a-48e3-b4fd-f0ae3f1527a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dnl5l" podUID="22a25218-506a-48e3-b4fd-f0ae3f1527a3"
Dec 13 08:48:37.478387 containerd[1595]: time="2024-12-13T08:48:37.478307015Z" level=error msg="StopPodSandbox for \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\" failed" error="failed to destroy network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 08:48:37.478672 kubelet[2743]: E1213 08:48:37.478626 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa"
Dec 13 08:48:37.478751 kubelet[2743]: E1213 08:48:37.478680 2743 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa"}
Dec 13 08:48:37.478751 kubelet[2743]: E1213 08:48:37.478745 2743 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8474ed1a-b932-4bf2-81e9-841701b51857\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 08:48:37.478897 kubelet[2743]: E1213 08:48:37.478792 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8474ed1a-b932-4bf2-81e9-841701b51857\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mncf7" podUID="8474ed1a-b932-4bf2-81e9-841701b51857"
Dec 13 08:48:37.485837 containerd[1595]: time="2024-12-13T08:48:37.485557866Z" level=error msg="StopPodSandbox for \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\" failed" error="failed to destroy network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 08:48:37.486292 kubelet[2743]: E1213 08:48:37.485884 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994"
Dec 13 08:48:37.486292 kubelet[2743]: E1213 08:48:37.485945 2743 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994"}
Dec 13 08:48:37.486292 kubelet[2743]: E1213 08:48:37.486056 2743 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad7eff51-0c89-4d1a-be7b-2c099a9c4335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 08:48:37.486292 kubelet[2743]: E1213 08:48:37.486111 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad7eff51-0c89-4d1a-be7b-2c099a9c4335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57f546d8b9-6fx86" podUID="ad7eff51-0c89-4d1a-be7b-2c099a9c4335"
Dec 13 08:48:37.489979 containerd[1595]: time="2024-12-13T08:48:37.489696256Z" level=error msg="StopPodSandbox for \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\" failed" error="failed to destroy network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 08:48:37.490143 kubelet[2743]: E1213 08:48:37.490025 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490"
Dec 13 08:48:37.490143 kubelet[2743]: E1213 08:48:37.490079 2743 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490"}
Dec 13 08:48:37.490143 kubelet[2743]: E1213 08:48:37.490137 2743 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15f81aca-82d2-4a86-bb58-7838817f9d2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 08:48:37.490491 kubelet[2743]: E1213 08:48:37.490187 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15f81aca-82d2-4a86-bb58-7838817f9d2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96bd6dbc8-jm4wj" podUID="15f81aca-82d2-4a86-bb58-7838817f9d2a"
kubelet[2743]: I1213 08:48:38.365869 2743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:48:38.367586 containerd[1595]: time="2024-12-13T08:48:38.367542367Z" level=info msg="StopPodSandbox for \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\"" Dec 13 08:48:38.368133 containerd[1595]: time="2024-12-13T08:48:38.367867445Z" level=info msg="Ensure that sandbox fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d in task-service has been cleanup successfully" Dec 13 08:48:38.409742 containerd[1595]: time="2024-12-13T08:48:38.409679711Z" level=error msg="StopPodSandbox for \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\" failed" error="failed to destroy network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:48:38.410024 kubelet[2743]: E1213 08:48:38.409997 2743 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:48:38.410117 kubelet[2743]: E1213 08:48:38.410050 2743 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d"} Dec 13 08:48:38.410117 kubelet[2743]: E1213 08:48:38.410106 2743 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b522e9d4-562e-4448-88f3-7d6870e65d2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:48:38.410279 kubelet[2743]: E1213 08:48:38.410151 2743 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b522e9d4-562e-4448-88f3-7d6870e65d2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h2vzm" podUID="b522e9d4-562e-4448-88f3-7d6870e65d2f" Dec 13 08:48:46.185625 kubelet[2743]: I1213 08:48:46.185568 2743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:48:46.191305 kubelet[2743]: E1213 08:48:46.190959 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:46.343739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3705514410.mount: 
Deactivated successfully. Dec 13 08:48:46.394258 kubelet[2743]: E1213 08:48:46.394151 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:46.504177 containerd[1595]: time="2024-12-13T08:48:46.503070800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:46.513792 containerd[1595]: time="2024-12-13T08:48:46.484959151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 08:48:46.528985 containerd[1595]: time="2024-12-13T08:48:46.528922391Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:46.530552 containerd[1595]: time="2024-12-13T08:48:46.530470769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:46.531848 containerd[1595]: time="2024-12-13T08:48:46.531784647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.2430318s" Dec 13 08:48:46.531991 containerd[1595]: time="2024-12-13T08:48:46.531848830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 08:48:46.635415 containerd[1595]: time="2024-12-13T08:48:46.635217285Z" level=info msg="CreateContainer within sandbox \"fabf26fe25a08d64f0bdb0eb6f91e1e5411b4bd285457edf50238268f39d5e9c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 08:48:46.724802 containerd[1595]: time="2024-12-13T08:48:46.724598239Z" level=info msg="CreateContainer within sandbox \"fabf26fe25a08d64f0bdb0eb6f91e1e5411b4bd285457edf50238268f39d5e9c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d3df44d7d23cc1e77f2c51eb73848debc53743faf8b80630b500e2aa09fa9b67\"" Dec 13 08:48:46.752109 containerd[1595]: time="2024-12-13T08:48:46.751720169Z" level=info msg="StartContainer for \"d3df44d7d23cc1e77f2c51eb73848debc53743faf8b80630b500e2aa09fa9b67\"" Dec 13 08:48:46.887130 systemd-journald[1139]: Under memory pressure, flushing caches. Dec 13 08:48:46.886571 systemd-resolved[1476]: Under memory pressure, flushing caches. Dec 13 08:48:46.886662 systemd-resolved[1476]: Flushed all caches. Dec 13 08:48:46.952282 containerd[1595]: time="2024-12-13T08:48:46.950593570Z" level=info msg="StartContainer for \"d3df44d7d23cc1e77f2c51eb73848debc53743faf8b80630b500e2aa09fa9b67\" returns successfully" Dec 13 08:48:47.088221 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 08:48:47.088884 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 08:48:47.405977 kubelet[2743]: E1213 08:48:47.405909 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:47.448456 kubelet[2743]: I1213 08:48:47.448378 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-dmbrr" podStartSLOduration=2.057340499 podStartE2EDuration="23.441538956s" podCreationTimestamp="2024-12-13 08:48:24 +0000 UTC" firstStartedPulling="2024-12-13 08:48:25.148430954 +0000 UTC m=+20.349245438" lastFinishedPulling="2024-12-13 08:48:46.532629393 +0000 UTC m=+41.733443895" observedRunningTime="2024-12-13 08:48:47.439121189 +0000 UTC m=+42.639935698" watchObservedRunningTime="2024-12-13 08:48:47.441538956 +0000 UTC m=+42.642353504" Dec 13 08:48:48.036516 containerd[1595]: time="2024-12-13T08:48:48.036378481Z" level=info msg="StopPodSandbox for \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\"" Dec 13 08:48:48.410498 kubelet[2743]: E1213 08:48:48.408647 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:48.515276 systemd[1]: run-containerd-runc-k8s.io-d3df44d7d23cc1e77f2c51eb73848debc53743faf8b80630b500e2aa09fa9b67-runc.AL2jKS.mount: Deactivated successfully. Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.144 [INFO][3926] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.145 [INFO][3926] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" iface="eth0" netns="/var/run/netns/cni-e195dc96-9fa0-899a-7a6c-f9074f6b2411" Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.146 [INFO][3926] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" iface="eth0" netns="/var/run/netns/cni-e195dc96-9fa0-899a-7a6c-f9074f6b2411" Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.149 [INFO][3926] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" iface="eth0" netns="/var/run/netns/cni-e195dc96-9fa0-899a-7a6c-f9074f6b2411" Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.149 [INFO][3926] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.149 [INFO][3926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.504 [INFO][3932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" HandleID="k8s-pod-network.e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.521 [INFO][3932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.522 [INFO][3932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.539 [WARNING][3932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" HandleID="k8s-pod-network.e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.540 [INFO][3932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" HandleID="k8s-pod-network.e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.542 [INFO][3932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:48:48.549853 containerd[1595]: 2024-12-13 08:48:48.546 [INFO][3926] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Dec 13 08:48:48.558494 containerd[1595]: time="2024-12-13T08:48:48.550584071Z" level=info msg="TearDown network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\" successfully" Dec 13 08:48:48.558494 containerd[1595]: time="2024-12-13T08:48:48.550633133Z" level=info msg="StopPodSandbox for \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\" returns successfully" Dec 13 08:48:48.558494 containerd[1595]: time="2024-12-13T08:48:48.555964070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dnl5l,Uid:22a25218-506a-48e3-b4fd-f0ae3f1527a3,Namespace:kube-system,Attempt:1,}" Dec 13 08:48:48.558651 kubelet[2743]: E1213 08:48:48.554766 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:48.561392 systemd[1]: run-netns-cni\x2de195dc96\x2d9fa0\x2d899a\x2d7a6c\x2df9074f6b2411.mount: Deactivated successfully. Dec 13 08:48:48.879218 systemd-networkd[1224]: cali1d3ce63dc0e: Link UP Dec 13 08:48:48.884295 systemd-networkd[1224]: cali1d3ce63dc0e: Gained carrier Dec 13 08:48:48.942225 systemd-journald[1139]: Under memory pressure, flushing caches. Dec 13 08:48:48.935188 systemd-resolved[1476]: Under memory pressure, flushing caches. 
Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.666 [INFO][3957] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.692 [INFO][3957] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0 coredns-76f75df574- kube-system 22a25218-506a-48e3-b4fd-f0ae3f1527a3 771 0 2024-12-13 08:48:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-f-1ee231485e coredns-76f75df574-dnl5l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d3ce63dc0e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Namespace="kube-system" Pod="coredns-76f75df574-dnl5l" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.692 [INFO][3957] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Namespace="kube-system" Pod="coredns-76f75df574-dnl5l" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.751 [INFO][3969] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" HandleID="k8s-pod-network.946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.776 [INFO][3969] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" HandleID="k8s-pod-network.946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332dd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-f-1ee231485e", "pod":"coredns-76f75df574-dnl5l", "timestamp":"2024-12-13 08:48:48.751513764 +0000 UTC"}, Hostname:"ci-4081.2.1-f-1ee231485e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.778 [INFO][3969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.778 [INFO][3969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.778 [INFO][3969] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-f-1ee231485e' Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.786 [INFO][3969] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.802 [INFO][3969] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.812 [INFO][3969] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.815 [INFO][3969] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.820 [INFO][3969] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.820 [INFO][3969] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.822 [INFO][3969] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.832 [INFO][3969] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.843 [INFO][3969] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.65/26] block=192.168.24.64/26 handle="k8s-pod-network.946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.844 [INFO][3969] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.65/26] handle="k8s-pod-network.946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.846 [INFO][3969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:48:48.945954 containerd[1595]: 2024-12-13 08:48:48.846 [INFO][3969] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.65/26] IPv6=[] ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" HandleID="k8s-pod-network.946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:48.935197 systemd-resolved[1476]: Flushed all caches. 
Dec 13 08:48:48.953209 containerd[1595]: 2024-12-13 08:48:48.856 [INFO][3957] cni-plugin/k8s.go 386: Populated endpoint ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Namespace="kube-system" Pod="coredns-76f75df574-dnl5l" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22a25218-506a-48e3-b4fd-f0ae3f1527a3", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"", Pod:"coredns-76f75df574-dnl5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d3ce63dc0e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:48.953209 containerd[1595]: 2024-12-13 08:48:48.857 [INFO][3957] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.65/32] ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Namespace="kube-system" Pod="coredns-76f75df574-dnl5l" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:48.953209 containerd[1595]: 2024-12-13 08:48:48.857 [INFO][3957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d3ce63dc0e ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Namespace="kube-system" Pod="coredns-76f75df574-dnl5l" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:48.953209 containerd[1595]: 2024-12-13 08:48:48.885 [INFO][3957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Namespace="kube-system" Pod="coredns-76f75df574-dnl5l" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:48.953209 containerd[1595]: 2024-12-13 08:48:48.887 [INFO][3957] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Namespace="kube-system" Pod="coredns-76f75df574-dnl5l" 
WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22a25218-506a-48e3-b4fd-f0ae3f1527a3", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab", Pod:"coredns-76f75df574-dnl5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d3ce63dc0e", MAC:"be:36:99:7c:6f:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:48.953209 containerd[1595]: 2024-12-13 08:48:48.920 [INFO][3957] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab" Namespace="kube-system" Pod="coredns-76f75df574-dnl5l" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:48:49.047335 containerd[1595]: time="2024-12-13T08:48:49.038808915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:49.047335 containerd[1595]: time="2024-12-13T08:48:49.038890050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:49.047335 containerd[1595]: time="2024-12-13T08:48:49.038914772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:49.047335 containerd[1595]: time="2024-12-13T08:48:49.039041788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:49.218908 containerd[1595]: time="2024-12-13T08:48:49.218853357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dnl5l,Uid:22a25218-506a-48e3-b4fd-f0ae3f1527a3,Namespace:kube-system,Attempt:1,} returns sandbox id \"946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab\"" Dec 13 08:48:49.221634 kubelet[2743]: E1213 08:48:49.221363 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:49.225405 containerd[1595]: time="2024-12-13T08:48:49.225210529Z" level=info msg="CreateContainer within sandbox \"946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:48:49.268700 containerd[1595]: time="2024-12-13T08:48:49.267879949Z" level=info msg="CreateContainer within sandbox \"946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"de5af5f4d2ccdb5b25b944e591b66737e1d7604f3830944026b59a05a4278480\"" Dec 13 08:48:49.270012 containerd[1595]: time="2024-12-13T08:48:49.269947330Z" level=info msg="StartContainer for \"de5af5f4d2ccdb5b25b944e591b66737e1d7604f3830944026b59a05a4278480\"" Dec 13 08:48:49.436535 kubelet[2743]: E1213 08:48:49.436413 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:49.450859 containerd[1595]: time="2024-12-13T08:48:49.450500421Z" level=info msg="StartContainer for \"de5af5f4d2ccdb5b25b944e591b66737e1d7604f3830944026b59a05a4278480\" returns successfully" Dec 13 08:48:49.601361 kernel: bpftool[4193]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 08:48:50.040985 containerd[1595]: time="2024-12-13T08:48:50.040767172Z" level=info msg="StopPodSandbox for \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\"" Dec 13 08:48:50.041598 systemd-networkd[1224]: vxlan.calico: Link UP Dec 13 08:48:50.041605 systemd-networkd[1224]: vxlan.calico: Gained carrier Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.227 [INFO][4256] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.228 [INFO][4256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" iface="eth0" netns="/var/run/netns/cni-dff875b3-e072-ef2b-96e7-0c8d15cb03ec" Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.228 [INFO][4256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" iface="eth0" netns="/var/run/netns/cni-dff875b3-e072-ef2b-96e7-0c8d15cb03ec" Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.230 [INFO][4256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" iface="eth0" netns="/var/run/netns/cni-dff875b3-e072-ef2b-96e7-0c8d15cb03ec" Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.230 [INFO][4256] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.231 [INFO][4256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.287 [INFO][4265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" HandleID="k8s-pod-network.f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.287 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.287 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.300 [WARNING][4265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" HandleID="k8s-pod-network.f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.300 [INFO][4265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" HandleID="k8s-pod-network.f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.303 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:48:50.308988 containerd[1595]: 2024-12-13 08:48:50.305 [INFO][4256] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:48:50.313750 containerd[1595]: time="2024-12-13T08:48:50.309171578Z" level=info msg="TearDown network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\" successfully" Dec 13 08:48:50.313750 containerd[1595]: time="2024-12-13T08:48:50.309457974Z" level=info msg="StopPodSandbox for \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\" returns successfully" Dec 13 08:48:50.313750 containerd[1595]: time="2024-12-13T08:48:50.311920398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96bd6dbc8-gtk2k,Uid:85ce8df6-d384-49f2-95e5-eb36247ebf47,Namespace:calico-apiserver,Attempt:1,}" Dec 13 08:48:50.319397 systemd[1]: run-netns-cni\x2ddff875b3\x2de072\x2def2b\x2d96e7\x2d0c8d15cb03ec.mount: Deactivated successfully. 
Dec 13 08:48:50.403642 systemd-networkd[1224]: cali1d3ce63dc0e: Gained IPv6LL Dec 13 08:48:50.441720 kubelet[2743]: E1213 08:48:50.441603 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:50.519633 kubelet[2743]: I1213 08:48:50.519366 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dnl5l" podStartSLOduration=33.519272722 podStartE2EDuration="33.519272722s" podCreationTimestamp="2024-12-13 08:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:50.502129371 +0000 UTC m=+45.702943877" watchObservedRunningTime="2024-12-13 08:48:50.519272722 +0000 UTC m=+45.720087221" Dec 13 08:48:50.635839 systemd-networkd[1224]: calidcb285b5f97: Link UP Dec 13 08:48:50.636239 systemd-networkd[1224]: calidcb285b5f97: Gained carrier Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.425 [INFO][4273] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0 calico-apiserver-96bd6dbc8- calico-apiserver 85ce8df6-d384-49f2-95e5-eb36247ebf47 788 0 2024-12-13 08:48:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:96bd6dbc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-f-1ee231485e calico-apiserver-96bd6dbc8-gtk2k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidcb285b5f97 [] []}} ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-gtk2k" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.426 [INFO][4273] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-gtk2k" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.524 [INFO][4285] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" HandleID="k8s-pod-network.8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.567 [INFO][4285] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" HandleID="k8s-pod-network.8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-f-1ee231485e", "pod":"calico-apiserver-96bd6dbc8-gtk2k", "timestamp":"2024-12-13 08:48:50.524668875 +0000 UTC"}, Hostname:"ci-4081.2.1-f-1ee231485e", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.568 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.569 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.571 [INFO][4285] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-f-1ee231485e' Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.576 [INFO][4285] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.585 [INFO][4285] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.594 [INFO][4285] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.597 [INFO][4285] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.601 [INFO][4285] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.602 [INFO][4285] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.604 [INFO][4285] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7 Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.612 [INFO][4285] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.624 [INFO][4285] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.66/26] block=192.168.24.64/26 handle="k8s-pod-network.8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.624 [INFO][4285] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.66/26] handle="k8s-pod-network.8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.624 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:48:50.677791 containerd[1595]: 2024-12-13 08:48:50.624 [INFO][4285] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.66/26] IPv6=[] ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" HandleID="k8s-pod-network.8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.681957 containerd[1595]: 2024-12-13 08:48:50.629 [INFO][4273] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-gtk2k" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0", GenerateName:"calico-apiserver-96bd6dbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"85ce8df6-d384-49f2-95e5-eb36247ebf47", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96bd6dbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"", Pod:"calico-apiserver-96bd6dbc8-gtk2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcb285b5f97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:50.681957 containerd[1595]: 2024-12-13 08:48:50.630 [INFO][4273] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.66/32] ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-gtk2k" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.681957 containerd[1595]: 2024-12-13 08:48:50.630 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcb285b5f97 ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-gtk2k" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.681957 containerd[1595]: 2024-12-13 08:48:50.635 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-gtk2k" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.681957 containerd[1595]: 2024-12-13 08:48:50.635 [INFO][4273] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-gtk2k" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0", GenerateName:"calico-apiserver-96bd6dbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"85ce8df6-d384-49f2-95e5-eb36247ebf47", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96bd6dbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7", Pod:"calico-apiserver-96bd6dbc8-gtk2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcb285b5f97", MAC:"c2:42:3a:2d:c8:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:50.681957 containerd[1595]: 2024-12-13 08:48:50.658 [INFO][4273] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-gtk2k" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:48:50.790295 containerd[1595]: time="2024-12-13T08:48:50.788506376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:50.790295 containerd[1595]: time="2024-12-13T08:48:50.788680935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:50.790295 containerd[1595]: time="2024-12-13T08:48:50.788722352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:50.790295 containerd[1595]: time="2024-12-13T08:48:50.788938092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:50.928477 containerd[1595]: time="2024-12-13T08:48:50.928214731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96bd6dbc8-gtk2k,Uid:85ce8df6-d384-49f2-95e5-eb36247ebf47,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7\"" Dec 13 08:48:50.934893 containerd[1595]: time="2024-12-13T08:48:50.933177989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 08:48:51.440904 kubelet[2743]: E1213 08:48:51.440855 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:51.939585 systemd-networkd[1224]: calidcb285b5f97: Gained IPv6LL Dec 13 08:48:52.004389 systemd-networkd[1224]: vxlan.calico: Gained IPv6LL Dec 13 08:48:52.037093 containerd[1595]: time="2024-12-13T08:48:52.036928295Z" level=info msg="StopPodSandbox for \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\"" Dec 13 08:48:52.037741 containerd[1595]: time="2024-12-13T08:48:52.037293196Z" level=info msg="StopPodSandbox for \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\"" Dec 13 08:48:52.044921 containerd[1595]: time="2024-12-13T08:48:52.044512614Z" level=info msg="StopPodSandbox for \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\"" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.144 [INFO][4425] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.145 [INFO][4425] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" iface="eth0" netns="/var/run/netns/cni-d18d28d5-7f64-027b-3b85-31cc7e2094c5" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.145 [INFO][4425] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" iface="eth0" netns="/var/run/netns/cni-d18d28d5-7f64-027b-3b85-31cc7e2094c5" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.146 [INFO][4425] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" iface="eth0" netns="/var/run/netns/cni-d18d28d5-7f64-027b-3b85-31cc7e2094c5" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.146 [INFO][4425] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.146 [INFO][4425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.268 [INFO][4442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" HandleID="k8s-pod-network.fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.270 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.270 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.280 [WARNING][4442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" HandleID="k8s-pod-network.fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.280 [INFO][4442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" HandleID="k8s-pod-network.fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.282 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:48:52.294148 containerd[1595]: 2024-12-13 08:48:52.288 [INFO][4425] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:48:52.298561 containerd[1595]: time="2024-12-13T08:48:52.296552247Z" level=info msg="TearDown network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\" successfully" Dec 13 08:48:52.299631 containerd[1595]: time="2024-12-13T08:48:52.298571650Z" level=info msg="StopPodSandbox for \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\" returns successfully" Dec 13 08:48:52.303145 systemd[1]: run-netns-cni\x2dd18d28d5\x2d7f64\x2d027b\x2d3b85\x2d31cc7e2094c5.mount: Deactivated successfully. Dec 13 08:48:52.303832 containerd[1595]: time="2024-12-13T08:48:52.303615393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2vzm,Uid:b522e9d4-562e-4448-88f3-7d6870e65d2f,Namespace:calico-system,Attempt:1,}" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.233 [INFO][4426] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.233 [INFO][4426] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" iface="eth0" netns="/var/run/netns/cni-1b7185e6-c039-2a14-b7e0-ad0fbff16f96" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.233 [INFO][4426] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" iface="eth0" netns="/var/run/netns/cni-1b7185e6-c039-2a14-b7e0-ad0fbff16f96" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.234 [INFO][4426] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" iface="eth0" netns="/var/run/netns/cni-1b7185e6-c039-2a14-b7e0-ad0fbff16f96" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.234 [INFO][4426] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.234 [INFO][4426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.286 [INFO][4453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" HandleID="k8s-pod-network.bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.286 [INFO][4453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.286 [INFO][4453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.314 [WARNING][4453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" HandleID="k8s-pod-network.bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.314 [INFO][4453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" HandleID="k8s-pod-network.bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.317 [INFO][4453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:48:52.326328 containerd[1595]: 2024-12-13 08:48:52.321 [INFO][4426] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:48:52.328849 containerd[1595]: time="2024-12-13T08:48:52.328642920Z" level=info msg="TearDown network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\" successfully" Dec 13 08:48:52.328849 containerd[1595]: time="2024-12-13T08:48:52.328685866Z" level=info msg="StopPodSandbox for \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\" returns successfully" Dec 13 08:48:52.335465 containerd[1595]: time="2024-12-13T08:48:52.332635957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96bd6dbc8-jm4wj,Uid:15f81aca-82d2-4a86-bb58-7838817f9d2a,Namespace:calico-apiserver,Attempt:1,}" Dec 13 08:48:52.334593 systemd[1]: run-netns-cni\x2d1b7185e6\x2dc039\x2d2a14\x2db7e0\x2dad0fbff16f96.mount: Deactivated successfully. Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.207 [INFO][4421] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.207 [INFO][4421] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" iface="eth0" netns="/var/run/netns/cni-46d65563-1304-6c37-4242-f9fa809bbac6" Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.207 [INFO][4421] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" iface="eth0" netns="/var/run/netns/cni-46d65563-1304-6c37-4242-f9fa809bbac6" Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.208 [INFO][4421] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" iface="eth0" netns="/var/run/netns/cni-46d65563-1304-6c37-4242-f9fa809bbac6" Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.209 [INFO][4421] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.209 [INFO][4421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.347 [INFO][4448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" HandleID="k8s-pod-network.141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.349 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.350 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.367 [WARNING][4448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" HandleID="k8s-pod-network.141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.367 [INFO][4448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" HandleID="k8s-pod-network.141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.372 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:48:52.396725 containerd[1595]: 2024-12-13 08:48:52.382 [INFO][4421] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:48:52.404530 containerd[1595]: time="2024-12-13T08:48:52.403247074Z" level=info msg="TearDown network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\" successfully" Dec 13 08:48:52.406594 containerd[1595]: time="2024-12-13T08:48:52.406261868Z" level=info msg="StopPodSandbox for \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\" returns successfully" Dec 13 08:48:52.409355 kubelet[2743]: E1213 08:48:52.408980 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:52.414090 containerd[1595]: time="2024-12-13T08:48:52.412444641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mncf7,Uid:8474ed1a-b932-4bf2-81e9-841701b51857,Namespace:kube-system,Attempt:1,}" Dec 13 08:48:52.466630 kubelet[2743]: E1213 08:48:52.466578 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:52.731083 systemd-networkd[1224]: cali06d4119a2f2: Link UP Dec 13 08:48:52.733765 systemd-networkd[1224]: cali06d4119a2f2: Gained carrier Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.479 [INFO][4463] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0 csi-node-driver- calico-system b522e9d4-562e-4448-88f3-7d6870e65d2f 810 0 2024-12-13 08:48:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.1-f-1ee231485e csi-node-driver-h2vzm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali06d4119a2f2 [] []}} ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Namespace="calico-system" Pod="csi-node-driver-h2vzm" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.479 [INFO][4463] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Namespace="calico-system" 
Pod="csi-node-driver-h2vzm" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.575 [INFO][4494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" HandleID="k8s-pod-network.97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.599 [INFO][4494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" HandleID="k8s-pod-network.97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011bc30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-f-1ee231485e", "pod":"csi-node-driver-h2vzm", "timestamp":"2024-12-13 08:48:52.575086386 +0000 UTC"}, Hostname:"ci-4081.2.1-f-1ee231485e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.599 [INFO][4494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.600 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.600 [INFO][4494] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-f-1ee231485e' Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.604 [INFO][4494] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.618 [INFO][4494] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.638 [INFO][4494] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.649 [INFO][4494] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.656 [INFO][4494] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.656 [INFO][4494] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.661 [INFO][4494] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.673 [INFO][4494] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.799098 
containerd[1595]: 2024-12-13 08:48:52.705 [INFO][4494] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.67/26] block=192.168.24.64/26 handle="k8s-pod-network.97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.706 [INFO][4494] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.67/26] handle="k8s-pod-network.97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.706 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:48:52.799098 containerd[1595]: 2024-12-13 08:48:52.706 [INFO][4494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.67/26] IPv6=[] ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" HandleID="k8s-pod-network.97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.804920 containerd[1595]: 2024-12-13 08:48:52.719 [INFO][4463] cni-plugin/k8s.go 386: Populated endpoint ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Namespace="calico-system" Pod="csi-node-driver-h2vzm" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b522e9d4-562e-4448-88f3-7d6870e65d2f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"", Pod:"csi-node-driver-h2vzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06d4119a2f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:52.804920 containerd[1595]: 2024-12-13 08:48:52.720 [INFO][4463] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.67/32] ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Namespace="calico-system" Pod="csi-node-driver-h2vzm" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.804920 containerd[1595]: 2024-12-13 08:48:52.720 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06d4119a2f2 ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Namespace="calico-system" Pod="csi-node-driver-h2vzm" 
WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.804920 containerd[1595]: 2024-12-13 08:48:52.741 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Namespace="calico-system" Pod="csi-node-driver-h2vzm" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.804920 containerd[1595]: 2024-12-13 08:48:52.745 [INFO][4463] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Namespace="calico-system" Pod="csi-node-driver-h2vzm" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b522e9d4-562e-4448-88f3-7d6870e65d2f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe", Pod:"csi-node-driver-h2vzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06d4119a2f2", MAC:"fe:0d:3d:80:a6:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:52.804920 containerd[1595]: 2024-12-13 08:48:52.789 [INFO][4463] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe" Namespace="calico-system" Pod="csi-node-driver-h2vzm" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:48:52.893825 containerd[1595]: time="2024-12-13T08:48:52.884652120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:52.893825 containerd[1595]: time="2024-12-13T08:48:52.884747704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:52.893825 containerd[1595]: time="2024-12-13T08:48:52.884792994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:52.893825 containerd[1595]: time="2024-12-13T08:48:52.885004914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:52.925294 systemd-networkd[1224]: calidb2c0d6fef0: Link UP Dec 13 08:48:52.935349 systemd-networkd[1224]: calidb2c0d6fef0: Gained carrier Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.562 [INFO][4472] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0 calico-apiserver-96bd6dbc8- calico-apiserver 15f81aca-82d2-4a86-bb58-7838817f9d2a 812 0 2024-12-13 08:48:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:96bd6dbc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-f-1ee231485e calico-apiserver-96bd6dbc8-jm4wj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb2c0d6fef0 [] []}} ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-jm4wj" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.562 [INFO][4472] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-jm4wj" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.677 [INFO][4503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" HandleID="k8s-pod-network.6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.716 [INFO][4503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" HandleID="k8s-pod-network.6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000453bb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-f-1ee231485e", "pod":"calico-apiserver-96bd6dbc8-jm4wj", "timestamp":"2024-12-13 08:48:52.677693705 +0000 UTC"}, Hostname:"ci-4081.2.1-f-1ee231485e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.716 [INFO][4503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.716 [INFO][4503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.716 [INFO][4503] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-f-1ee231485e' Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.720 [INFO][4503] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.741 [INFO][4503] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.789 [INFO][4503] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.797 [INFO][4503] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.819 [INFO][4503] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.819 [INFO][4503] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.825 [INFO][4503] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275 Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.845 [INFO][4503] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.892 [INFO][4503] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.68/26] block=192.168.24.64/26 handle="k8s-pod-network.6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.892 [INFO][4503] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.68/26] handle="k8s-pod-network.6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.892 [INFO][4503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
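The assignment sequence just completed by request [4503] (acquire the host-wide lock, look up the host's block affinities, try the affine block 192.168.24.64/26, load it, claim one address, write the block back, release the lock) is Calico's block-affinity IPAM: each node claims /26 blocks out of the pool and serves per-pod requests from its own blocks, so most assignments never contend with other nodes. A minimal Go sketch of that control flow, with hypothetical types and helpers standing in for the real logic in Calico's ipam.go:

package main

import "fmt"

// Block is a hypothetical stand-in for a Calico allocation block: a /26
// carved out of the IP pool and affine to one host.
type Block struct {
	CIDR string
	Free []string // addresses still unassigned in this block
}

// assignFromAffineBlock mirrors the logged steps: look up the host's
// affinities, try each affine block, claim one address, and "write the
// block in order to claim IPs" (persistence elided here).
func assignFromAffineBlock(host, handle string, affinities map[string][]*Block) (string, error) {
	blocks, ok := affinities[host] // "Looking up existing affinities for host"
	if !ok {
		return "", fmt.Errorf("no affine blocks for host %q", host)
	}
	for _, b := range blocks { // "Trying affinity for 192.168.24.64/26"
		if len(b.Free) == 0 {
			continue // block exhausted; the real code would claim a new block
		}
		ip := b.Free[0] // "Attempting to assign 1 addresses from block"
		b.Free = b.Free[1:]
		// A real implementation now writes b back to the datastore under
		// the allocation handle, e.g. "k8s-pod-network.<container id>".
		_ = handle
		return ip, nil
	}
	return "", fmt.Errorf("all affine blocks on %q are full", host)
}

func main() {
	aff := map[string][]*Block{
		"ci-4081.2.1-f-1ee231485e": {{CIDR: "192.168.24.64/26", Free: []string{"192.168.24.68"}}},
	}
	ip, err := assignFromAffineBlock("ci-4081.2.1-f-1ee231485e", "k8s-pod-network.6604bdcf", aff)
	fmt.Println(ip, err) // 192.168.24.68 <nil>
}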
Dec 13 08:48:52.986739 containerd[1595]: 2024-12-13 08:48:52.892 [INFO][4503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.68/26] IPv6=[] ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" HandleID="k8s-pod-network.6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:52.992828 containerd[1595]: 2024-12-13 08:48:52.910 [INFO][4472] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-jm4wj" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0", GenerateName:"calico-apiserver-96bd6dbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"15f81aca-82d2-4a86-bb58-7838817f9d2a", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96bd6dbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"", Pod:"calico-apiserver-96bd6dbc8-jm4wj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb2c0d6fef0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:52.992828 containerd[1595]: 2024-12-13 08:48:52.910 [INFO][4472] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.68/32] ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-jm4wj" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:52.992828 containerd[1595]: 2024-12-13 08:48:52.911 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb2c0d6fef0 ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-jm4wj" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:52.992828 containerd[1595]: 2024-12-13 08:48:52.948 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-jm4wj" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:52.992828 containerd[1595]: 2024-12-13 08:48:52.949 [INFO][4472] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-jm4wj" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0", GenerateName:"calico-apiserver-96bd6dbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"15f81aca-82d2-4a86-bb58-7838817f9d2a", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96bd6dbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275", Pod:"calico-apiserver-96bd6dbc8-jm4wj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb2c0d6fef0", MAC:"6a:7d:49:4c:2e:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:52.992828 containerd[1595]: 2024-12-13 08:48:52.972 [INFO][4472] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275" Namespace="calico-apiserver" Pod="calico-apiserver-96bd6dbc8-jm4wj" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:48:53.059766 containerd[1595]: time="2024-12-13T08:48:53.058966920Z" level=info msg="StopPodSandbox for \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\"" Dec 13 08:48:53.084629 systemd-networkd[1224]: caliadf124cfcfb: Link UP Dec 13 08:48:53.085130 systemd-networkd[1224]: caliadf124cfcfb: Gained carrier Dec 13 08:48:53.166836 containerd[1595]: time="2024-12-13T08:48:53.166370221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:53.167271 containerd[1595]: time="2024-12-13T08:48:53.167203628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:53.167573 containerd[1595]: time="2024-12-13T08:48:53.167513890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.602 [INFO][4484] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0 coredns-76f75df574- kube-system 8474ed1a-b932-4bf2-81e9-841701b51857 811 0 2024-12-13 08:48:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-f-1ee231485e coredns-76f75df574-mncf7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliadf124cfcfb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Namespace="kube-system" Pod="coredns-76f75df574-mncf7" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.603 [INFO][4484] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Namespace="kube-system" Pod="coredns-76f75df574-mncf7" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.737 [INFO][4509] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" HandleID="k8s-pod-network.80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.794 [INFO][4509] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" HandleID="k8s-pod-network.80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae650), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-f-1ee231485e", "pod":"coredns-76f75df574-mncf7", "timestamp":"2024-12-13 08:48:52.737542517 +0000 UTC"}, Hostname:"ci-4081.2.1-f-1ee231485e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.795 [INFO][4509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.892 [INFO][4509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.892 [INFO][4509] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-f-1ee231485e' Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.899 [INFO][4509] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.913 [INFO][4509] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.959 [INFO][4509] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.963 [INFO][4509] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.982 [INFO][4509] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.982 [INFO][4509] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:52.990 [INFO][4509] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5 Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:53.016 [INFO][4509] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:53.044 [INFO][4509] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.69/26] block=192.168.24.64/26 handle="k8s-pod-network.80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:53.055 [INFO][4509] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.69/26] handle="k8s-pod-network.80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:53.056 [INFO][4509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
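Note how the coredns request ([4509]) logs "About to acquire host-wide IPAM lock" at 08:48:52.795 but only "Acquired" at 08:48:52.892, the moment the apiserver request ([4503]) logs "Released": concurrent CNI invocations on the node are serialized around a single lock so they cannot race on the shared allocation blocks. A sketch of that bracketing pattern (illustrative only; the real lock is implemented in Calico's ipam_plugin.go):

package main

import (
	"log"
	"sync"
)

// One host-wide lock serializes every IPAM operation on the node, mirroring
// the "About to acquire / Acquired / Released host-wide IPAM lock" lines.
var hostIPAMLock sync.Mutex

func withHostIPAMLock(op string, fn func()) {
	log.Printf("About to acquire host-wide IPAM lock. op=%s", op)
	hostIPAMLock.Lock()
	log.Printf("Acquired host-wide IPAM lock. op=%s", op)
	defer func() {
		hostIPAMLock.Unlock()
		log.Printf("Released host-wide IPAM lock. op=%s", op)
	}()
	fn()
}

func main() {
	var wg sync.WaitGroup
	// Three concurrent requests, like [4494], [4503] and [4509] above.
	for _, pod := range []string{"csi-node-driver-h2vzm", "calico-apiserver-96bd6dbc8-jm4wj", "coredns-76f75df574-mncf7"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			withHostIPAMLock("assign "+p, func() { /* claim one address here */ })
		}(pod)
	}
	wg.Wait()
}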
Dec 13 08:48:53.168484 containerd[1595]: 2024-12-13 08:48:53.057 [INFO][4509] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.69/26] IPv6=[] ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" HandleID="k8s-pod-network.80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:53.170152 containerd[1595]: 2024-12-13 08:48:53.072 [INFO][4484] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Namespace="kube-system" Pod="coredns-76f75df574-mncf7" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8474ed1a-b932-4bf2-81e9-841701b51857", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"", Pod:"coredns-76f75df574-mncf7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadf124cfcfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:53.170152 containerd[1595]: 2024-12-13 08:48:53.073 [INFO][4484] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.69/32] ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Namespace="kube-system" Pod="coredns-76f75df574-mncf7" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:53.170152 containerd[1595]: 2024-12-13 08:48:53.073 [INFO][4484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadf124cfcfb ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Namespace="kube-system" Pod="coredns-76f75df574-mncf7" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:53.170152 containerd[1595]: 2024-12-13 08:48:53.084 [INFO][4484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Namespace="kube-system" Pod="coredns-76f75df574-mncf7" 
WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:53.170152 containerd[1595]: 2024-12-13 08:48:53.086 [INFO][4484] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Namespace="kube-system" Pod="coredns-76f75df574-mncf7" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8474ed1a-b932-4bf2-81e9-841701b51857", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5", Pod:"coredns-76f75df574-mncf7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadf124cfcfb", MAC:"b2:16:1a:49:84:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:53.170152 containerd[1595]: 2024-12-13 08:48:53.144 [INFO][4484] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5" Namespace="kube-system" Pod="coredns-76f75df574-mncf7" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:48:53.171612 containerd[1595]: time="2024-12-13T08:48:53.170006860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:53.218979 containerd[1595]: time="2024-12-13T08:48:53.218767324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2vzm,Uid:b522e9d4-562e-4448-88f3-7d6870e65d2f,Namespace:calico-system,Attempt:1,} returns sandbox id \"97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe\"" Dec 13 08:48:53.328562 systemd[1]: run-netns-cni\x2d46d65563\x2d1304\x2d6c37\x2d4242\x2df9fa809bbac6.mount: Deactivated successfully. Dec 13 08:48:53.360124 containerd[1595]: time="2024-12-13T08:48:53.359884297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:53.362810 containerd[1595]: time="2024-12-13T08:48:53.362700581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:53.365551 containerd[1595]: time="2024-12-13T08:48:53.363403161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:53.365551 containerd[1595]: time="2024-12-13T08:48:53.363750036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:53.382826 containerd[1595]: time="2024-12-13T08:48:53.382756507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96bd6dbc8-jm4wj,Uid:15f81aca-82d2-4a86-bb58-7838817f9d2a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275\"" Dec 13 08:48:53.566207 containerd[1595]: time="2024-12-13T08:48:53.566163791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mncf7,Uid:8474ed1a-b932-4bf2-81e9-841701b51857,Namespace:kube-system,Attempt:1,} returns sandbox id \"80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5\"" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.411 [INFO][4604] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.412 [INFO][4604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" iface="eth0" netns="/var/run/netns/cni-656a4943-1c83-69ab-2664-d8930040133b" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.413 [INFO][4604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" iface="eth0" netns="/var/run/netns/cni-656a4943-1c83-69ab-2664-d8930040133b" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.413 [INFO][4604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" iface="eth0" netns="/var/run/netns/cni-656a4943-1c83-69ab-2664-d8930040133b" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.413 [INFO][4604] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.413 [INFO][4604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.538 [INFO][4682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" HandleID="k8s-pod-network.88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.539 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.539 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.551 [WARNING][4682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" HandleID="k8s-pod-network.88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.552 [INFO][4682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" HandleID="k8s-pod-network.88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.554 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:48:53.566631 containerd[1595]: 2024-12-13 08:48:53.560 [INFO][4604] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:48:53.567581 containerd[1595]: time="2024-12-13T08:48:53.567553086Z" level=info msg="TearDown network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\" successfully" Dec 13 08:48:53.567693 containerd[1595]: time="2024-12-13T08:48:53.567676594Z" level=info msg="StopPodSandbox for \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\" returns successfully" Dec 13 08:48:53.570697 containerd[1595]: time="2024-12-13T08:48:53.570647239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f546d8b9-6fx86,Uid:ad7eff51-0c89-4d1a-be7b-2c099a9c4335,Namespace:calico-system,Attempt:1,}" Dec 13 08:48:53.576912 kubelet[2743]: E1213 08:48:53.576857 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:53.578520 systemd[1]: run-netns-cni\x2d656a4943\x2d1c83\x2d69ab\x2d2664\x2dd8930040133b.mount: Deactivated successfully. 
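The DEL side, seen here for sandbox 88a2c659… and earlier for bed1db76… and 141e0055…, is deliberately idempotent: release the address by handle ID, treat "Asked to release address but it doesn't exist" as a warning rather than an error, then release by workload ID as a fallback, so a repeated StopPodSandbox still returns successfully. A hedged sketch of that shape (the helper names are hypothetical, not Calico's actual API):

package main

import (
	"errors"
	"log"
)

var errNotFound = errors.New("allocation does not exist")

// releaseByHandle and releaseByWorkload are hypothetical stand-ins for the
// two release paths logged by ipam_plugin.go (handleID, then workloadID).
func releaseByHandle(handleID string) error   { return errNotFound }
func releaseByWorkload(workload string) error { return nil }

// teardown mirrors the logged DEL flow: both release paths run, and a
// missing allocation is ignored so the teardown stays idempotent.
func teardown(handleID, workload string) error {
	if err := releaseByHandle(handleID); errors.Is(err, errNotFound) {
		log.Printf("Asked to release address but it doesn't exist. Ignoring handleID=%s", handleID)
	} else if err != nil {
		return err
	}
	return releaseByWorkload(workload) // "Releasing address using workloadID"
}

func main() {
	if err := teardown("k8s-pod-network.88a2c659", "calico-kube-controllers-57f546d8b9-6fx86"); err != nil {
		log.Fatal(err)
	}
	log.Print("Teardown processing complete.")
}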
Dec 13 08:48:53.588533 containerd[1595]: time="2024-12-13T08:48:53.588199681Z" level=info msg="CreateContainer within sandbox \"80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:48:53.684052 containerd[1595]: time="2024-12-13T08:48:53.683844054Z" level=info msg="CreateContainer within sandbox \"80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f281bf5ff020ef52333e7c8c711cda092e5bb58d0e5ac08f4db047eeb04b8c6d\"" Dec 13 08:48:53.685085 containerd[1595]: time="2024-12-13T08:48:53.684631385Z" level=info msg="StartContainer for \"f281bf5ff020ef52333e7c8c711cda092e5bb58d0e5ac08f4db047eeb04b8c6d\"" Dec 13 08:48:53.856060 containerd[1595]: time="2024-12-13T08:48:53.855075481Z" level=info msg="StartContainer for \"f281bf5ff020ef52333e7c8c711cda092e5bb58d0e5ac08f4db047eeb04b8c6d\" returns successfully" Dec 13 08:48:53.924489 systemd-networkd[1224]: cali06d4119a2f2: Gained IPv6LL Dec 13 08:48:54.049587 systemd-networkd[1224]: calie130ee86485: Link UP Dec 13 08:48:54.052235 systemd-networkd[1224]: calie130ee86485: Gained carrier Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.794 [INFO][4712] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0 calico-kube-controllers-57f546d8b9- calico-system ad7eff51-0c89-4d1a-be7b-2c099a9c4335 828 0 2024-12-13 08:48:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57f546d8b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.1-f-1ee231485e calico-kube-controllers-57f546d8b9-6fx86 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie130ee86485 [] []}} ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Namespace="calico-system" Pod="calico-kube-controllers-57f546d8b9-6fx86" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.795 [INFO][4712] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Namespace="calico-system" Pod="calico-kube-controllers-57f546d8b9-6fx86" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.902 [INFO][4750] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" HandleID="k8s-pod-network.a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.943 [INFO][4750] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" HandleID="k8s-pod-network.a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000050db0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-f-1ee231485e", "pod":"calico-kube-controllers-57f546d8b9-6fx86", "timestamp":"2024-12-13 08:48:53.902036404 +0000 UTC"}, Hostname:"ci-4081.2.1-f-1ee231485e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.943 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.943 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.943 [INFO][4750] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-f-1ee231485e' Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.953 [INFO][4750] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.965 [INFO][4750] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.979 [INFO][4750] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.985 [INFO][4750] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.996 [INFO][4750] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:53.998 [INFO][4750] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:54.002 [INFO][4750] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2 Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:54.017 [INFO][4750] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:54.038 [INFO][4750] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.70/26] block=192.168.24.64/26 handle="k8s-pod-network.a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:54.038 [INFO][4750] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.70/26] handle="k8s-pod-network.a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" host="ci-4081.2.1-f-1ee231485e" Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:54.038 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
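Each ADD in this trace also logs "Setting the host side veth name to cali…": the host end of the pod's veth pair gets a stable name derived from the workload identity, kept short enough for Linux's 15-character interface-name limit. A sketch of one plausible derivation (a fixed "cali" prefix plus 11 hash characters, which matches the shape of cali06d4119a2f2 and calie130ee86485 here; treat the exact hashing scheme as an assumption, not Calico's verbatim code):

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameForWorkload derives a stable host-side interface name from the
// workload identity: a fixed "cali" prefix plus 11 hex chars of a hash,
// i.e. 15 characters, the maximum Linux allows (IFNAMSIZ minus the NUL).
// The hashing details here are an illustrative assumption.
func vethNameForWorkload(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethNameForWorkload("calico-system", "calico-kube-controllers-57f546d8b9-6fx86"))
}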
Dec 13 08:48:54.096517 containerd[1595]: 2024-12-13 08:48:54.038 [INFO][4750] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.70/26] IPv6=[] ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" HandleID="k8s-pod-network.a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:54.101497 containerd[1595]: 2024-12-13 08:48:54.043 [INFO][4712] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Namespace="calico-system" Pod="calico-kube-controllers-57f546d8b9-6fx86" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0", GenerateName:"calico-kube-controllers-57f546d8b9-", Namespace:"calico-system", SelfLink:"", UID:"ad7eff51-0c89-4d1a-be7b-2c099a9c4335", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f546d8b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"", Pod:"calico-kube-controllers-57f546d8b9-6fx86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie130ee86485", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:54.101497 containerd[1595]: 2024-12-13 08:48:54.043 [INFO][4712] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.70/32] ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Namespace="calico-system" Pod="calico-kube-controllers-57f546d8b9-6fx86" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:54.101497 containerd[1595]: 2024-12-13 08:48:54.043 [INFO][4712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie130ee86485 ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Namespace="calico-system" Pod="calico-kube-controllers-57f546d8b9-6fx86" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:54.101497 containerd[1595]: 2024-12-13 08:48:54.054 [INFO][4712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Namespace="calico-system" Pod="calico-kube-controllers-57f546d8b9-6fx86" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:54.101497 
containerd[1595]: 2024-12-13 08:48:54.055 [INFO][4712] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Namespace="calico-system" Pod="calico-kube-controllers-57f546d8b9-6fx86" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0", GenerateName:"calico-kube-controllers-57f546d8b9-", Namespace:"calico-system", SelfLink:"", UID:"ad7eff51-0c89-4d1a-be7b-2c099a9c4335", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f546d8b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2", Pod:"calico-kube-controllers-57f546d8b9-6fx86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie130ee86485", MAC:"c2:5a:e8:bf:b3:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:48:54.101497 containerd[1595]: 2024-12-13 08:48:54.085 [INFO][4712] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2" Namespace="calico-system" Pod="calico-kube-controllers-57f546d8b9-6fx86" WorkloadEndpoint="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:48:54.168541 containerd[1595]: time="2024-12-13T08:48:54.168304685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:54.168541 containerd[1595]: time="2024-12-13T08:48:54.168454423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:54.168541 containerd[1595]: time="2024-12-13T08:48:54.168489998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:54.168785 containerd[1595]: time="2024-12-13T08:48:54.168706669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:54.281865 containerd[1595]: time="2024-12-13T08:48:54.281786050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f546d8b9-6fx86,Uid:ad7eff51-0c89-4d1a-be7b-2c099a9c4335,Namespace:calico-system,Attempt:1,} returns sandbox id \"a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2\"" Dec 13 08:48:54.508837 kubelet[2743]: E1213 08:48:54.508734 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:54.592636 kubelet[2743]: I1213 08:48:54.591119 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mncf7" podStartSLOduration=37.590833334 podStartE2EDuration="37.590833334s" podCreationTimestamp="2024-12-13 08:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:54.539849809 +0000 UTC m=+49.740664317" watchObservedRunningTime="2024-12-13 08:48:54.590833334 +0000 UTC m=+49.791647842" Dec 13 08:48:54.691484 systemd-networkd[1224]: caliadf124cfcfb: Gained IPv6LL Dec 13 08:48:54.821008 systemd-networkd[1224]: calidb2c0d6fef0: Gained IPv6LL Dec 13 08:48:55.295942 containerd[1595]: time="2024-12-13T08:48:55.294009430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:55.299426 containerd[1595]: time="2024-12-13T08:48:55.299346211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 08:48:55.304176 containerd[1595]: time="2024-12-13T08:48:55.302814889Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:55.312885 containerd[1595]: time="2024-12-13T08:48:55.311858710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:55.313997 containerd[1595]: time="2024-12-13T08:48:55.313452241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.380023607s" Dec 13 08:48:55.313997 containerd[1595]: time="2024-12-13T08:48:55.313507584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 08:48:55.321509 containerd[1595]: time="2024-12-13T08:48:55.317554837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 08:48:55.321509 containerd[1595]: time="2024-12-13T08:48:55.321116131Z" level=info msg="CreateContainer within sandbox \"8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 08:48:55.372799 containerd[1595]: time="2024-12-13T08:48:55.372749430Z" level=info 
msg="CreateContainer within sandbox \"8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ed1b0835a7b5c8c7a71c21f47a18612c94571063e65adc00b75e751ce3e99ed1\"" Dec 13 08:48:55.374120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157868252.mount: Deactivated successfully. Dec 13 08:48:55.379824 containerd[1595]: time="2024-12-13T08:48:55.379163140Z" level=info msg="StartContainer for \"ed1b0835a7b5c8c7a71c21f47a18612c94571063e65adc00b75e751ce3e99ed1\"" Dec 13 08:48:55.459571 systemd-networkd[1224]: calie130ee86485: Gained IPv6LL Dec 13 08:48:55.497000 systemd[1]: run-containerd-runc-k8s.io-ed1b0835a7b5c8c7a71c21f47a18612c94571063e65adc00b75e751ce3e99ed1-runc.IRORPm.mount: Deactivated successfully. Dec 13 08:48:55.539218 kubelet[2743]: E1213 08:48:55.539148 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:55.628968 containerd[1595]: time="2024-12-13T08:48:55.627120973Z" level=info msg="StartContainer for \"ed1b0835a7b5c8c7a71c21f47a18612c94571063e65adc00b75e751ce3e99ed1\" returns successfully" Dec 13 08:48:56.567468 kubelet[2743]: E1213 08:48:56.567429 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:56.583681 kubelet[2743]: I1213 08:48:56.582946 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-96bd6dbc8-gtk2k" podStartSLOduration=28.200830279 podStartE2EDuration="32.582894684s" podCreationTimestamp="2024-12-13 08:48:24 +0000 UTC" firstStartedPulling="2024-12-13 08:48:50.932652307 +0000 UTC m=+46.133466796" lastFinishedPulling="2024-12-13 08:48:55.314716703 +0000 UTC m=+50.515531201" observedRunningTime="2024-12-13 08:48:56.578866492 +0000 UTC m=+51.779681000" watchObservedRunningTime="2024-12-13 08:48:56.582894684 +0000 UTC m=+51.783709188" Dec 13 08:48:57.362052 containerd[1595]: time="2024-12-13T08:48:57.361971895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:57.367116 containerd[1595]: time="2024-12-13T08:48:57.366981115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 08:48:57.372279 containerd[1595]: time="2024-12-13T08:48:57.372206694Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:57.380799 containerd[1595]: time="2024-12-13T08:48:57.380041059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:57.384524 containerd[1595]: time="2024-12-13T08:48:57.384374312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.066760638s" Dec 13 08:48:57.384524 
containerd[1595]: time="2024-12-13T08:48:57.384515614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 08:48:57.386837 containerd[1595]: time="2024-12-13T08:48:57.386477403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 08:48:57.390895 containerd[1595]: time="2024-12-13T08:48:57.390840323Z" level=info msg="CreateContainer within sandbox \"97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 08:48:57.435379 containerd[1595]: time="2024-12-13T08:48:57.435272952Z" level=info msg="CreateContainer within sandbox \"97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cae5213208459ad50604b0a41084dfd6247f67fbd32183544ea5f8fd8c4c568b\"" Dec 13 08:48:57.436809 containerd[1595]: time="2024-12-13T08:48:57.436392234Z" level=info msg="StartContainer for \"cae5213208459ad50604b0a41084dfd6247f67fbd32183544ea5f8fd8c4c568b\"" Dec 13 08:48:57.554393 kubelet[2743]: I1213 08:48:57.553777 2743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:48:57.597599 containerd[1595]: time="2024-12-13T08:48:57.597431035Z" level=info msg="StartContainer for \"cae5213208459ad50604b0a41084dfd6247f67fbd32183544ea5f8fd8c4c568b\" returns successfully" Dec 13 08:48:57.868287 containerd[1595]: time="2024-12-13T08:48:57.868191871Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:57.870780 containerd[1595]: time="2024-12-13T08:48:57.870675360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 08:48:57.876276 containerd[1595]: time="2024-12-13T08:48:57.876190834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 489.668897ms" Dec 13 08:48:57.876276 containerd[1595]: time="2024-12-13T08:48:57.876276916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 08:48:57.877997 containerd[1595]: time="2024-12-13T08:48:57.877353529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 08:48:57.880881 containerd[1595]: time="2024-12-13T08:48:57.880620948Z" level=info msg="CreateContainer within sandbox \"6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 08:48:57.920789 containerd[1595]: time="2024-12-13T08:48:57.920476516Z" level=info msg="CreateContainer within sandbox \"6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab2cdbe194871aa580c783d08aa8fe0e09575dd2d2d0da4e6520705ddef236fb\"" Dec 13 08:48:57.922053 containerd[1595]: time="2024-12-13T08:48:57.921288253Z" level=info msg="StartContainer for 
\"ab2cdbe194871aa580c783d08aa8fe0e09575dd2d2d0da4e6520705ddef236fb\"" Dec 13 08:48:58.015414 systemd[1]: Started sshd@7-146.190.59.17:22-147.75.109.163:45254.service - OpenSSH per-connection server daemon (147.75.109.163:45254). Dec 13 08:48:58.078907 containerd[1595]: time="2024-12-13T08:48:58.077691765Z" level=info msg="StartContainer for \"ab2cdbe194871aa580c783d08aa8fe0e09575dd2d2d0da4e6520705ddef236fb\" returns successfully" Dec 13 08:48:58.165760 sshd[4939]: Accepted publickey for core from 147.75.109.163 port 45254 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:48:58.172113 sshd[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:48:58.200870 systemd-logind[1572]: New session 8 of user core. Dec 13 08:48:58.207780 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 08:48:58.922645 systemd-journald[1139]: Under memory pressure, flushing caches. Dec 13 08:48:58.919970 systemd-resolved[1476]: Under memory pressure, flushing caches. Dec 13 08:48:58.920037 systemd-resolved[1476]: Flushed all caches. Dec 13 08:48:59.013415 sshd[4939]: pam_unix(sshd:session): session closed for user core Dec 13 08:48:59.025360 systemd[1]: sshd@7-146.190.59.17:22-147.75.109.163:45254.service: Deactivated successfully. Dec 13 08:48:59.039850 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 08:48:59.048147 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit. Dec 13 08:48:59.052785 systemd-logind[1572]: Removed session 8. Dec 13 08:48:59.589908 kubelet[2743]: I1213 08:48:59.589256 2743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:49:01.085104 containerd[1595]: time="2024-12-13T08:49:01.084463494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:49:01.089654 containerd[1595]: time="2024-12-13T08:49:01.089554607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 08:49:01.095051 containerd[1595]: time="2024-12-13T08:49:01.094946588Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:49:01.101988 containerd[1595]: time="2024-12-13T08:49:01.101846615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:49:01.103454 containerd[1595]: time="2024-12-13T08:49:01.103382620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.225825272s" Dec 13 08:49:01.103454 containerd[1595]: time="2024-12-13T08:49:01.103455124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 08:49:01.104955 containerd[1595]: time="2024-12-13T08:49:01.104913131Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 08:49:01.145798 containerd[1595]: time="2024-12-13T08:49:01.145517305Z" level=info msg="CreateContainer within sandbox \"a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 08:49:01.213086 containerd[1595]: time="2024-12-13T08:49:01.213010590Z" level=info msg="CreateContainer within sandbox \"a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1ac3fb2d86f58a15abc7a7c0080bbdc5a76facc8aadb70704bd04a91b89d4831\"" Dec 13 08:49:01.214641 containerd[1595]: time="2024-12-13T08:49:01.214365665Z" level=info msg="StartContainer for \"1ac3fb2d86f58a15abc7a7c0080bbdc5a76facc8aadb70704bd04a91b89d4831\"" Dec 13 08:49:01.358059 containerd[1595]: time="2024-12-13T08:49:01.357844869Z" level=info msg="StartContainer for \"1ac3fb2d86f58a15abc7a7c0080bbdc5a76facc8aadb70704bd04a91b89d4831\" returns successfully" Dec 13 08:49:01.669194 kubelet[2743]: I1213 08:49:01.668972 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-96bd6dbc8-jm4wj" podStartSLOduration=33.195114003 podStartE2EDuration="37.668900068s" podCreationTimestamp="2024-12-13 08:48:24 +0000 UTC" firstStartedPulling="2024-12-13 08:48:53.40313995 +0000 UTC m=+48.603954446" lastFinishedPulling="2024-12-13 08:48:57.876926007 +0000 UTC m=+53.077740511" observedRunningTime="2024-12-13 08:48:58.664849308 +0000 UTC m=+53.865663817" watchObservedRunningTime="2024-12-13 08:49:01.668900068 +0000 UTC m=+56.869714577" Dec 13 08:49:01.670098 kubelet[2743]: I1213 08:49:01.669427 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57f546d8b9-6fx86" podStartSLOduration=30.849619468 podStartE2EDuration="37.669373885s" podCreationTimestamp="2024-12-13 08:48:24 +0000 UTC" firstStartedPulling="2024-12-13 08:48:54.284214406 +0000 UTC m=+49.485028899" lastFinishedPulling="2024-12-13 08:49:01.103968811 +0000 UTC m=+56.304783316" observedRunningTime="2024-12-13 08:49:01.667398171 +0000 UTC m=+56.868212710" watchObservedRunningTime="2024-12-13 08:49:01.669373885 +0000 UTC m=+56.870188393" Dec 13 08:49:03.364037 containerd[1595]: time="2024-12-13T08:49:03.363945818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:49:03.369007 containerd[1595]: time="2024-12-13T08:49:03.368567246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 08:49:03.374208 containerd[1595]: time="2024-12-13T08:49:03.373579958Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:49:03.379013 containerd[1595]: time="2024-12-13T08:49:03.378948237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:49:03.380923 containerd[1595]: time="2024-12-13T08:49:03.380865162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.27566066s" Dec 13 08:49:03.381161 containerd[1595]: time="2024-12-13T08:49:03.381128143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 08:49:03.386421 containerd[1595]: time="2024-12-13T08:49:03.386369323Z" level=info msg="CreateContainer within sandbox \"97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 08:49:03.426729 containerd[1595]: time="2024-12-13T08:49:03.426513621Z" level=info msg="CreateContainer within sandbox \"97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bf760067eb07638bfaca13243f9ca72454cb8dea3a3f43efca0919c3798b0f83\"" Dec 13 08:49:03.428693 containerd[1595]: time="2024-12-13T08:49:03.428496787Z" level=info msg="StartContainer for \"bf760067eb07638bfaca13243f9ca72454cb8dea3a3f43efca0919c3798b0f83\"" Dec 13 08:49:03.577996 containerd[1595]: time="2024-12-13T08:49:03.577911946Z" level=info msg="StartContainer for \"bf760067eb07638bfaca13243f9ca72454cb8dea3a3f43efca0919c3798b0f83\" returns successfully" Dec 13 08:49:04.029431 systemd[1]: Started sshd@8-146.190.59.17:22-147.75.109.163:45266.service - OpenSSH per-connection server daemon (147.75.109.163:45266). Dec 13 08:49:04.211489 sshd[5079]: Accepted publickey for core from 147.75.109.163 port 45266 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:04.215818 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:04.226613 systemd-logind[1572]: New session 9 of user core. Dec 13 08:49:04.233819 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 08:49:04.438984 kubelet[2743]: I1213 08:49:04.438237 2743 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 08:49:04.452366 kubelet[2743]: I1213 08:49:04.450398 2743 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 08:49:04.604490 sshd[5079]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:04.611672 systemd[1]: sshd@8-146.190.59.17:22-147.75.109.163:45266.service: Deactivated successfully. Dec 13 08:49:04.622767 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit. Dec 13 08:49:04.623794 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 08:49:04.627010 systemd-logind[1572]: Removed session 9. Dec 13 08:49:04.871227 systemd-journald[1139]: Under memory pressure, flushing caches. Dec 13 08:49:04.867785 systemd-resolved[1476]: Under memory pressure, flushing caches. Dec 13 08:49:04.868163 systemd-resolved[1476]: Flushed all caches. 
Dec 13 08:49:05.032417 containerd[1595]: time="2024-12-13T08:49:05.031953099Z" level=info msg="StopPodSandbox for \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\"" Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.137 [WARNING][5110] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b522e9d4-562e-4448-88f3-7d6870e65d2f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe", Pod:"csi-node-driver-h2vzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06d4119a2f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.138 [INFO][5110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.138 [INFO][5110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" iface="eth0" netns="" Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.138 [INFO][5110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.138 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.186 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" HandleID="k8s-pod-network.fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.186 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.187 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
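The ipam_plugin entries above and below trace the release path: acquire the host-wide IPAM lock, release by handle, then by workload ID, and treat a missing allocation as a no-op ("Asked to release address but it doesn't exist. Ignoring"). A hedged sketch of that idempotent-release pattern, with toy names and types that are illustrative only, not Calico's actual implementation:

    package main

    import (
        "fmt"
        "sync"
    )

    // allocator is a toy stand-in for the IPAM datastore: handleID -> address.
    type allocator struct {
        mu    sync.Mutex // plays the role of the host-wide IPAM lock
        byHID map[string]string
    }

    // releaseByHandle is idempotent: releasing an unknown handle warns and
    // succeeds, mirroring the "doesn't exist. Ignoring" entries in this trace.
    func (a *allocator) releaseByHandle(handleID string) {
        a.mu.Lock()
        defer a.mu.Unlock()
        addr, ok := a.byHID[handleID]
        if !ok {
            fmt.Printf("WARNING: no allocation for %q, ignoring\n", handleID)
            return
        }
        delete(a.byHID, handleID)
        fmt.Printf("released %s (handle %q)\n", addr, handleID)
    }

    func main() {
        a := &allocator{byHID: map[string]string{
            "k8s-pod-network.fca3c514f7f8": "192.168.24.67",
        }}
        a.releaseByHandle("k8s-pod-network.fca3c514f7f8") // releases the address
        a.releaseByHandle("k8s-pod-network.fca3c514f7f8") // second call is a safe no-op
    }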
Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.196 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" HandleID="k8s-pod-network.fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.196 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" HandleID="k8s-pod-network.fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.199 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:05.205482 containerd[1595]: 2024-12-13 08:49:05.201 [INFO][5110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:49:05.205482 containerd[1595]: time="2024-12-13T08:49:05.205387618Z" level=info msg="TearDown network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\" successfully" Dec 13 08:49:05.205482 containerd[1595]: time="2024-12-13T08:49:05.205431810Z" level=info msg="StopPodSandbox for \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\" returns successfully" Dec 13 08:49:05.207955 containerd[1595]: time="2024-12-13T08:49:05.207901762Z" level=info msg="RemovePodSandbox for \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\"" Dec 13 08:49:05.208096 containerd[1595]: time="2024-12-13T08:49:05.207971661Z" level=info msg="Forcibly stopping sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\"" Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.277 [WARNING][5134] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b522e9d4-562e-4448-88f3-7d6870e65d2f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"97d1a89cb42ee10cb13ab0f8166f1859bf3f7c361bafd735a079a26862143ebe", Pod:"csi-node-driver-h2vzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali06d4119a2f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.278 [INFO][5134] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.278 [INFO][5134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" iface="eth0" netns="" Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.278 [INFO][5134] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.278 [INFO][5134] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.308 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" HandleID="k8s-pod-network.fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.308 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.308 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.316 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" HandleID="k8s-pod-network.fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.316 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" HandleID="k8s-pod-network.fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Workload="ci--4081.2.1--f--1ee231485e-k8s-csi--node--driver--h2vzm-eth0" Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.319 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:05.324397 containerd[1595]: 2024-12-13 08:49:05.321 [INFO][5134] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d" Dec 13 08:49:05.325236 containerd[1595]: time="2024-12-13T08:49:05.324448665Z" level=info msg="TearDown network for sandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\" successfully" Dec 13 08:49:05.345785 containerd[1595]: time="2024-12-13T08:49:05.345678422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:49:05.345977 containerd[1595]: time="2024-12-13T08:49:05.345834534Z" level=info msg="RemovePodSandbox \"fca3c514f7f8c4d204e70a110dac8175cda6f29e0b4f44b16ad50975c0d8d96d\" returns successfully" Dec 13 08:49:05.347009 containerd[1595]: time="2024-12-13T08:49:05.346957746Z" level=info msg="StopPodSandbox for \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\"" Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.415 [WARNING][5158] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8474ed1a-b932-4bf2-81e9-841701b51857", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5", Pod:"coredns-76f75df574-mncf7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadf124cfcfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.416 [INFO][5158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.416 [INFO][5158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" iface="eth0" netns="" Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.416 [INFO][5158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.416 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.453 [INFO][5164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" HandleID="k8s-pod-network.141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.453 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.453 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.462 [WARNING][5164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" HandleID="k8s-pod-network.141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.462 [INFO][5164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" HandleID="k8s-pod-network.141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.465 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:05.470420 containerd[1595]: 2024-12-13 08:49:05.467 [INFO][5158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:49:05.474196 containerd[1595]: time="2024-12-13T08:49:05.470294343Z" level=info msg="TearDown network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\" successfully" Dec 13 08:49:05.474196 containerd[1595]: time="2024-12-13T08:49:05.471089413Z" level=info msg="StopPodSandbox for \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\" returns successfully" Dec 13 08:49:05.474196 containerd[1595]: time="2024-12-13T08:49:05.471898214Z" level=info msg="RemovePodSandbox for \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\"" Dec 13 08:49:05.474196 containerd[1595]: time="2024-12-13T08:49:05.471953197Z" level=info msg="Forcibly stopping sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\"" Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.546 [WARNING][5182] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8474ed1a-b932-4bf2-81e9-841701b51857", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"80d346ffb8305da2179ee626dae32887528e170d50fdd8e65699262ccad811f5", Pod:"coredns-76f75df574-mncf7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadf124cfcfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.546 [INFO][5182] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.546 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" iface="eth0" netns="" Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.546 [INFO][5182] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.546 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.593 [INFO][5188] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" HandleID="k8s-pod-network.141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.594 [INFO][5188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.594 [INFO][5188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.603 [WARNING][5188] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" HandleID="k8s-pod-network.141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.603 [INFO][5188] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" HandleID="k8s-pod-network.141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--mncf7-eth0" Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.606 [INFO][5188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:05.611372 containerd[1595]: 2024-12-13 08:49:05.608 [INFO][5182] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa" Dec 13 08:49:05.612152 containerd[1595]: time="2024-12-13T08:49:05.611461362Z" level=info msg="TearDown network for sandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\" successfully" Dec 13 08:49:05.620808 containerd[1595]: time="2024-12-13T08:49:05.620666134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:49:05.621031 containerd[1595]: time="2024-12-13T08:49:05.620835104Z" level=info msg="RemovePodSandbox \"141e005579883fb6ceed57c203e9f9d67ea9c522c3eda706ae635ecfe5e746aa\" returns successfully" Dec 13 08:49:05.622691 containerd[1595]: time="2024-12-13T08:49:05.622308774Z" level=info msg="StopPodSandbox for \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\"" Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.683 [WARNING][5206] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0", GenerateName:"calico-apiserver-96bd6dbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"15f81aca-82d2-4a86-bb58-7838817f9d2a", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96bd6dbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275", Pod:"calico-apiserver-96bd6dbc8-jm4wj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb2c0d6fef0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.683 [INFO][5206] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.683 [INFO][5206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" iface="eth0" netns="" Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.683 [INFO][5206] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.683 [INFO][5206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.732 [INFO][5212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" HandleID="k8s-pod-network.bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.733 [INFO][5212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.733 [INFO][5212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.742 [WARNING][5212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" HandleID="k8s-pod-network.bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.743 [INFO][5212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" HandleID="k8s-pod-network.bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.747 [INFO][5212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:05.772513 containerd[1595]: 2024-12-13 08:49:05.752 [INFO][5206] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:49:05.772513 containerd[1595]: time="2024-12-13T08:49:05.772460689Z" level=info msg="TearDown network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\" successfully" Dec 13 08:49:05.773467 containerd[1595]: time="2024-12-13T08:49:05.772537305Z" level=info msg="StopPodSandbox for \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\" returns successfully" Dec 13 08:49:05.777671 containerd[1595]: time="2024-12-13T08:49:05.776714651Z" level=info msg="RemovePodSandbox for \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\"" Dec 13 08:49:05.777671 containerd[1595]: time="2024-12-13T08:49:05.776792637Z" level=info msg="Forcibly stopping sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\"" Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.872 [WARNING][5230] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0", GenerateName:"calico-apiserver-96bd6dbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"15f81aca-82d2-4a86-bb58-7838817f9d2a", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96bd6dbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"6604bdcfb8d31c5dab8b2b1bd7def8d04229ee2c4a4008f2b1438e4a0f905275", Pod:"calico-apiserver-96bd6dbc8-jm4wj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb2c0d6fef0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.872 [INFO][5230] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.873 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" iface="eth0" netns="" Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.873 [INFO][5230] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.873 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.920 [INFO][5238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" HandleID="k8s-pod-network.bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.921 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.921 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.929 [WARNING][5238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" HandleID="k8s-pod-network.bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.929 [INFO][5238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" HandleID="k8s-pod-network.bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--jm4wj-eth0" Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.932 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:05.938179 containerd[1595]: 2024-12-13 08:49:05.935 [INFO][5230] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490" Dec 13 08:49:05.940365 containerd[1595]: time="2024-12-13T08:49:05.939229964Z" level=info msg="TearDown network for sandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\" successfully" Dec 13 08:49:05.965970 containerd[1595]: time="2024-12-13T08:49:05.965899144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:49:05.966343 containerd[1595]: time="2024-12-13T08:49:05.966229215Z" level=info msg="RemovePodSandbox \"bed1db76645c64c013d8b870946a5987655cb9263dc27476838f8e3dbe085490\" returns successfully" Dec 13 08:49:05.967002 containerd[1595]: time="2024-12-13T08:49:05.966946084Z" level=info msg="StopPodSandbox for \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\"" Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.050 [WARNING][5257] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0", GenerateName:"calico-apiserver-96bd6dbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"85ce8df6-d384-49f2-95e5-eb36247ebf47", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96bd6dbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7", Pod:"calico-apiserver-96bd6dbc8-gtk2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcb285b5f97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.050 [INFO][5257] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.050 [INFO][5257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" iface="eth0" netns="" Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.050 [INFO][5257] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.050 [INFO][5257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.091 [INFO][5264] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" HandleID="k8s-pod-network.f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.091 [INFO][5264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.092 [INFO][5264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.103 [WARNING][5264] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" HandleID="k8s-pod-network.f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.103 [INFO][5264] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" HandleID="k8s-pod-network.f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.106 [INFO][5264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:06.114085 containerd[1595]: 2024-12-13 08:49:06.109 [INFO][5257] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:49:06.114085 containerd[1595]: time="2024-12-13T08:49:06.114007496Z" level=info msg="TearDown network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\" successfully" Dec 13 08:49:06.114085 containerd[1595]: time="2024-12-13T08:49:06.114038119Z" level=info msg="StopPodSandbox for \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\" returns successfully" Dec 13 08:49:06.115722 containerd[1595]: time="2024-12-13T08:49:06.115672304Z" level=info msg="RemovePodSandbox for \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\"" Dec 13 08:49:06.115807 containerd[1595]: time="2024-12-13T08:49:06.115727647Z" level=info msg="Forcibly stopping sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\"" Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.180 [WARNING][5282] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0", GenerateName:"calico-apiserver-96bd6dbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"85ce8df6-d384-49f2-95e5-eb36247ebf47", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96bd6dbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"8e78c7fe87e20c740d2c5eba1e88e34a7c37b307500b3059f0078e75304474d7", Pod:"calico-apiserver-96bd6dbc8-gtk2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcb285b5f97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.180 [INFO][5282] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.181 [INFO][5282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" iface="eth0" netns="" Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.181 [INFO][5282] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.181 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.222 [INFO][5288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" HandleID="k8s-pod-network.f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.222 [INFO][5288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.222 [INFO][5288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.233 [WARNING][5288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" HandleID="k8s-pod-network.f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.233 [INFO][5288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" HandleID="k8s-pod-network.f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--apiserver--96bd6dbc8--gtk2k-eth0" Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.235 [INFO][5288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:06.240368 containerd[1595]: 2024-12-13 08:49:06.238 [INFO][5282] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d" Dec 13 08:49:06.241129 containerd[1595]: time="2024-12-13T08:49:06.240426913Z" level=info msg="TearDown network for sandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\" successfully" Dec 13 08:49:06.249637 containerd[1595]: time="2024-12-13T08:49:06.249506562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:49:06.249986 containerd[1595]: time="2024-12-13T08:49:06.249698361Z" level=info msg="RemovePodSandbox \"f55d5ef23255de1184d6e55a6307145eb14cde5cf6dc14ad305e06168d1f2f5d\" returns successfully" Dec 13 08:49:06.251296 containerd[1595]: time="2024-12-13T08:49:06.250853041Z" level=info msg="StopPodSandbox for \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\"" Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.306 [WARNING][5306] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0", GenerateName:"calico-kube-controllers-57f546d8b9-", Namespace:"calico-system", SelfLink:"", UID:"ad7eff51-0c89-4d1a-be7b-2c099a9c4335", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f546d8b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2", Pod:"calico-kube-controllers-57f546d8b9-6fx86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie130ee86485", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.306 [INFO][5306] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.307 [INFO][5306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" iface="eth0" netns="" Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.307 [INFO][5306] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.307 [INFO][5306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.343 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" HandleID="k8s-pod-network.88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.343 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.343 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.352 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" HandleID="k8s-pod-network.88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.352 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" HandleID="k8s-pod-network.88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.355 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:06.360982 containerd[1595]: 2024-12-13 08:49:06.358 [INFO][5306] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:49:06.362024 containerd[1595]: time="2024-12-13T08:49:06.361028440Z" level=info msg="TearDown network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\" successfully" Dec 13 08:49:06.362024 containerd[1595]: time="2024-12-13T08:49:06.361060742Z" level=info msg="StopPodSandbox for \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\" returns successfully" Dec 13 08:49:06.362024 containerd[1595]: time="2024-12-13T08:49:06.361739146Z" level=info msg="RemovePodSandbox for \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\"" Dec 13 08:49:06.362024 containerd[1595]: time="2024-12-13T08:49:06.361803151Z" level=info msg="Forcibly stopping sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\"" Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.428 [WARNING][5330] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0", GenerateName:"calico-kube-controllers-57f546d8b9-", Namespace:"calico-system", SelfLink:"", UID:"ad7eff51-0c89-4d1a-be7b-2c099a9c4335", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f546d8b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"a7b372091e39ca2786d6d350d9cc9bfd8d8b692691438d7dded428f8c38b48e2", Pod:"calico-kube-controllers-57f546d8b9-6fx86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie130ee86485", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.431 [INFO][5330] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.431 [INFO][5330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" iface="eth0" netns="" Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.432 [INFO][5330] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.432 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.477 [INFO][5337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" HandleID="k8s-pod-network.88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.477 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.477 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.485 [WARNING][5337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" HandleID="k8s-pod-network.88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.485 [INFO][5337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" HandleID="k8s-pod-network.88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Workload="ci--4081.2.1--f--1ee231485e-k8s-calico--kube--controllers--57f546d8b9--6fx86-eth0" Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.488 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:49:06.495410 containerd[1595]: 2024-12-13 08:49:06.492 [INFO][5330] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994" Dec 13 08:49:06.495410 containerd[1595]: time="2024-12-13T08:49:06.494602317Z" level=info msg="TearDown network for sandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\" successfully" Dec 13 08:49:06.505060 containerd[1595]: time="2024-12-13T08:49:06.504963823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:49:06.505738 containerd[1595]: time="2024-12-13T08:49:06.505556762Z" level=info msg="RemovePodSandbox \"88a2c659b86670869b1e58084aba1c5252fb5a43fe511433d1904e0a6457f994\" returns successfully" Dec 13 08:49:06.506562 containerd[1595]: time="2024-12-13T08:49:06.506406860Z" level=info msg="StopPodSandbox for \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\"" Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.575 [WARNING][5356] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22a25218-506a-48e3-b4fd-f0ae3f1527a3", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab", Pod:"coredns-76f75df574-dnl5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d3ce63dc0e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.575 [INFO][5356] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.575 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" iface="eth0" netns="" Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.575 [INFO][5356] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.575 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.619 [INFO][5362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" HandleID="k8s-pod-network.e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0" Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.620 [INFO][5362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.620 [INFO][5362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.629 [WARNING][5362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" HandleID="k8s-pod-network.e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0"
Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.630 [INFO][5362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" HandleID="k8s-pod-network.e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0"
Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.633 [INFO][5362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 08:49:06.644628 containerd[1595]: 2024-12-13 08:49:06.640 [INFO][5356] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6"
Dec 13 08:49:06.645799 containerd[1595]: time="2024-12-13T08:49:06.644686335Z" level=info msg="TearDown network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\" successfully"
Dec 13 08:49:06.645799 containerd[1595]: time="2024-12-13T08:49:06.644723432Z" level=info msg="StopPodSandbox for \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\" returns successfully"
Dec 13 08:49:06.647190 containerd[1595]: time="2024-12-13T08:49:06.646216491Z" level=info msg="RemovePodSandbox for \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\""
Dec 13 08:49:06.647190 containerd[1595]: time="2024-12-13T08:49:06.646427502Z" level=info msg="Forcibly stopping sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\""
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.708 [WARNING][5380] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22a25218-506a-48e3-b4fd-f0ae3f1527a3", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 48, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-f-1ee231485e", ContainerID:"946033eaf8a3961e9a6ee9ce8ff2b1a51a5215b56918a5ca22ea3f0c896580ab", Pod:"coredns-76f75df574-dnl5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d3ce63dc0e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.709 [INFO][5380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6"
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.709 [INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" iface="eth0" netns=""
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.709 [INFO][5380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6"
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.709 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6"
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.777 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" HandleID="k8s-pod-network.e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0"
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.777 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.778 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.789 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" HandleID="k8s-pod-network.e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0"
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.789 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" HandleID="k8s-pod-network.e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6" Workload="ci--4081.2.1--f--1ee231485e-k8s-coredns--76f75df574--dnl5l-eth0"
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.792 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 08:49:06.798723 containerd[1595]: 2024-12-13 08:49:06.795 [INFO][5380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6"
Dec 13 08:49:06.798723 containerd[1595]: time="2024-12-13T08:49:06.798694124Z" level=info msg="TearDown network for sandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\" successfully"
Dec 13 08:49:06.815123 containerd[1595]: time="2024-12-13T08:49:06.815011045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 08:49:06.815384 containerd[1595]: time="2024-12-13T08:49:06.815176661Z" level=info msg="RemovePodSandbox \"e7da841d81749af693ecfbd0652e76efb5e8682d811085e12e1532a48bfe4fb6\" returns successfully"
Dec 13 08:49:09.626047 systemd[1]: Started sshd@9-146.190.59.17:22-147.75.109.163:53152.service - OpenSSH per-connection server daemon (147.75.109.163:53152).
Dec 13 08:49:09.752221 sshd[5413]: Accepted publickey for core from 147.75.109.163 port 53152 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:09.755729 sshd[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:09.765892 systemd-logind[1572]: New session 10 of user core.
Dec 13 08:49:09.770987 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 08:49:10.255448 sshd[5413]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:10.269446 systemd[1]: Started sshd@10-146.190.59.17:22-147.75.109.163:53168.service - OpenSSH per-connection server daemon (147.75.109.163:53168).
Dec 13 08:49:10.273591 systemd[1]: sshd@9-146.190.59.17:22-147.75.109.163:53152.service: Deactivated successfully.
Dec 13 08:49:10.279132 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 08:49:10.282811 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit.
Dec 13 08:49:10.289461 systemd-logind[1572]: Removed session 10.
Dec 13 08:49:10.343405 sshd[5425]: Accepted publickey for core from 147.75.109.163 port 53168 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:10.346887 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:10.357651 systemd-logind[1572]: New session 11 of user core.
Dec 13 08:49:10.365478 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 08:49:10.685075 sshd[5425]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:10.700432 systemd[1]: Started sshd@11-146.190.59.17:22-147.75.109.163:53182.service - OpenSSH per-connection server daemon (147.75.109.163:53182).
Dec 13 08:49:10.721996 systemd[1]: sshd@10-146.190.59.17:22-147.75.109.163:53168.service: Deactivated successfully.
Dec 13 08:49:10.748216 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 08:49:10.753257 systemd-logind[1572]: Session 11 logged out. Waiting for processes to exit.
Dec 13 08:49:10.763484 systemd-logind[1572]: Removed session 11.
Dec 13 08:49:10.812350 sshd[5435]: Accepted publickey for core from 147.75.109.163 port 53182 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:10.818855 sshd[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:10.833431 systemd-logind[1572]: New session 12 of user core.
Dec 13 08:49:10.840947 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 08:49:10.887129 systemd-journald[1139]: Under memory pressure, flushing caches.
Dec 13 08:49:10.883554 systemd-resolved[1476]: Under memory pressure, flushing caches.
Dec 13 08:49:10.883566 systemd-resolved[1476]: Flushed all caches.
Dec 13 08:49:11.078133 sshd[5435]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:11.086522 systemd[1]: sshd@11-146.190.59.17:22-147.75.109.163:53182.service: Deactivated successfully.
Dec 13 08:49:11.094046 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 08:49:11.097743 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit.
Dec 13 08:49:11.100688 systemd-logind[1572]: Removed session 12.
Dec 13 08:49:14.047249 kubelet[2743]: E1213 08:49:14.046584 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:49:16.048520 kubelet[2743]: E1213 08:49:16.048419 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:49:16.091939 systemd[1]: Started sshd@12-146.190.59.17:22-147.75.109.163:39324.service - OpenSSH per-connection server daemon (147.75.109.163:39324).
Dec 13 08:49:16.142234 sshd[5462]: Accepted publickey for core from 147.75.109.163 port 39324 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:16.146668 sshd[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:16.161674 systemd-logind[1572]: New session 13 of user core.
Dec 13 08:49:16.168305 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 08:49:16.431650 sshd[5462]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:16.447646 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit.
Dec 13 08:49:16.448403 systemd[1]: sshd@12-146.190.59.17:22-147.75.109.163:39324.service: Deactivated successfully.
Dec 13 08:49:16.458979 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 08:49:16.462706 systemd-logind[1572]: Removed session 13.
Dec 13 08:49:17.269104 kubelet[2743]: I1213 08:49:17.269058 2743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 08:49:17.335590 kubelet[2743]: I1213 08:49:17.335520 2743 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-h2vzm" podStartSLOduration=43.17558052 podStartE2EDuration="53.33527445s" podCreationTimestamp="2024-12-13 08:48:24 +0000 UTC" firstStartedPulling="2024-12-13 08:48:53.221915494 +0000 UTC m=+48.422729998" lastFinishedPulling="2024-12-13 08:49:03.38160944 +0000 UTC m=+58.582423928" observedRunningTime="2024-12-13 08:49:03.643170579 +0000 UTC m=+58.843985087" watchObservedRunningTime="2024-12-13 08:49:17.33527445 +0000 UTC m=+72.536088959"
Dec 13 08:49:21.443791 systemd[1]: Started sshd@13-146.190.59.17:22-147.75.109.163:39326.service - OpenSSH per-connection server daemon (147.75.109.163:39326).
Dec 13 08:49:21.548336 sshd[5509]: Accepted publickey for core from 147.75.109.163 port 39326 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:21.552675 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:21.565161 systemd-logind[1572]: New session 14 of user core.
Dec 13 08:49:21.575672 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 08:49:22.023642 sshd[5509]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:22.033907 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit.
Dec 13 08:49:22.035736 systemd[1]: sshd@13-146.190.59.17:22-147.75.109.163:39326.service: Deactivated successfully.
Dec 13 08:49:22.043337 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 08:49:22.047360 systemd-logind[1572]: Removed session 14.
Dec 13 08:49:26.037296 kubelet[2743]: E1213 08:49:26.037161 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:49:27.047359 systemd[1]: Started sshd@14-146.190.59.17:22-147.75.109.163:43358.service - OpenSSH per-connection server daemon (147.75.109.163:43358).
Dec 13 08:49:27.145555 sshd[5524]: Accepted publickey for core from 147.75.109.163 port 43358 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:27.147805 sshd[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:27.160797 systemd-logind[1572]: New session 15 of user core.
Dec 13 08:49:27.168873 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 08:49:27.437345 sshd[5524]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:27.442284 systemd[1]: sshd@14-146.190.59.17:22-147.75.109.163:43358.service: Deactivated successfully.
Dec 13 08:49:27.449596 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 08:49:27.449885 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit.
Dec 13 08:49:27.452665 systemd-logind[1572]: Removed session 15.
Dec 13 08:49:32.447021 systemd[1]: Started sshd@15-146.190.59.17:22-147.75.109.163:43372.service - OpenSSH per-connection server daemon (147.75.109.163:43372).
Dec 13 08:49:32.524373 sshd[5544]: Accepted publickey for core from 147.75.109.163 port 43372 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:32.527496 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:32.537471 systemd-logind[1572]: New session 16 of user core.
Dec 13 08:49:32.540913 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 08:49:32.698955 kubelet[2743]: I1213 08:49:32.698583 2743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 08:49:32.745636 sshd[5544]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:32.771585 systemd[1]: Started sshd@16-146.190.59.17:22-147.75.109.163:43376.service - OpenSSH per-connection server daemon (147.75.109.163:43376).
Dec 13 08:49:32.772651 systemd[1]: sshd@15-146.190.59.17:22-147.75.109.163:43372.service: Deactivated successfully.
Dec 13 08:49:32.797207 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 08:49:32.801604 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit.
Dec 13 08:49:32.807122 systemd-logind[1572]: Removed session 16.
Dec 13 08:49:32.859325 sshd[5555]: Accepted publickey for core from 147.75.109.163 port 43376 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:32.861806 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:32.870638 systemd-logind[1572]: New session 17 of user core.
Dec 13 08:49:32.876919 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 08:49:33.039105 kubelet[2743]: E1213 08:49:33.038491 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:49:33.661029 sshd[5555]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:33.679507 systemd[1]: Started sshd@17-146.190.59.17:22-147.75.109.163:43386.service - OpenSSH per-connection server daemon (147.75.109.163:43386).
Dec 13 08:49:33.681296 systemd[1]: sshd@16-146.190.59.17:22-147.75.109.163:43376.service: Deactivated successfully.
Dec 13 08:49:33.697585 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 08:49:33.710785 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit.
Dec 13 08:49:33.713629 systemd-logind[1572]: Removed session 17.
Dec 13 08:49:33.826675 sshd[5569]: Accepted publickey for core from 147.75.109.163 port 43386 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:33.828175 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:33.842730 systemd-logind[1572]: New session 18 of user core.
Dec 13 08:49:33.850030 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 08:49:36.921654 sshd[5569]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:36.972513 systemd-journald[1139]: Under memory pressure, flushing caches.
Dec 13 08:49:36.932544 systemd-resolved[1476]: Under memory pressure, flushing caches.
Dec 13 08:49:36.932558 systemd-resolved[1476]: Flushed all caches.
Dec 13 08:49:36.953957 systemd[1]: Started sshd@18-146.190.59.17:22-147.75.109.163:40450.service - OpenSSH per-connection server daemon (147.75.109.163:40450).
Dec 13 08:49:36.962137 systemd[1]: sshd@17-146.190.59.17:22-147.75.109.163:43386.service: Deactivated successfully.
Dec 13 08:49:37.005517 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 08:49:37.006262 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit.
Dec 13 08:49:37.060440 systemd-logind[1572]: Removed session 18.
Dec 13 08:49:37.154430 sshd[5592]: Accepted publickey for core from 147.75.109.163 port 40450 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:37.164413 sshd[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:37.187301 systemd-logind[1572]: New session 19 of user core.
Dec 13 08:49:37.194764 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 08:49:38.284871 sshd[5592]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:38.301980 systemd[1]: Started sshd@19-146.190.59.17:22-147.75.109.163:40452.service - OpenSSH per-connection server daemon (147.75.109.163:40452).
Dec 13 08:49:38.308253 systemd[1]: sshd@18-146.190.59.17:22-147.75.109.163:40450.service: Deactivated successfully.
Dec 13 08:49:38.319729 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit.
Dec 13 08:49:38.320078 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 08:49:38.333089 systemd-logind[1572]: Removed session 19.
Dec 13 08:49:38.419207 sshd[5622]: Accepted publickey for core from 147.75.109.163 port 40452 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:38.423019 sshd[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:38.431371 systemd-logind[1572]: New session 20 of user core.
Dec 13 08:49:38.445535 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 08:49:38.651504 sshd[5622]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:38.657767 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit.
Dec 13 08:49:38.657812 systemd[1]: sshd@19-146.190.59.17:22-147.75.109.163:40452.service: Deactivated successfully.
Dec 13 08:49:38.663101 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 08:49:38.665250 systemd-logind[1572]: Removed session 20.
Dec 13 08:49:38.979898 systemd-resolved[1476]: Under memory pressure, flushing caches.
Dec 13 08:49:38.983078 systemd-journald[1139]: Under memory pressure, flushing caches.
Dec 13 08:49:38.979911 systemd-resolved[1476]: Flushed all caches.
Dec 13 08:49:43.660942 systemd[1]: Started sshd@20-146.190.59.17:22-147.75.109.163:40454.service - OpenSSH per-connection server daemon (147.75.109.163:40454).
Dec 13 08:49:43.724798 sshd[5647]: Accepted publickey for core from 147.75.109.163 port 40454 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:43.727521 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:43.739793 systemd-logind[1572]: New session 21 of user core.
Dec 13 08:49:43.745085 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 08:49:43.914485 sshd[5647]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:43.920919 systemd[1]: sshd@20-146.190.59.17:22-147.75.109.163:40454.service: Deactivated successfully.
Dec 13 08:49:43.928652 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 08:49:43.930286 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit.
Dec 13 08:49:43.931919 systemd-logind[1572]: Removed session 21.
Dec 13 08:49:47.247390 systemd[1]: run-containerd-runc-k8s.io-d3df44d7d23cc1e77f2c51eb73848debc53743faf8b80630b500e2aa09fa9b67-runc.A4PLjY.mount: Deactivated successfully.
Dec 13 08:49:47.363003 kubelet[2743]: E1213 08:49:47.362254 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 08:49:48.927823 systemd[1]: Started sshd@21-146.190.59.17:22-147.75.109.163:42280.service - OpenSSH per-connection server daemon (147.75.109.163:42280).
Dec 13 08:49:49.025345 sshd[5684]: Accepted publickey for core from 147.75.109.163 port 42280 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:49.027265 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:49.038930 systemd-logind[1572]: New session 22 of user core.
Dec 13 08:49:49.046812 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 08:49:49.250037 sshd[5684]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:49.256420 systemd-logind[1572]: Session 22 logged out. Waiting for processes to exit.
Dec 13 08:49:49.256651 systemd[1]: sshd@21-146.190.59.17:22-147.75.109.163:42280.service: Deactivated successfully.
Dec 13 08:49:49.261534 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 08:49:49.264257 systemd-logind[1572]: Removed session 22.
Dec 13 08:49:54.263290 systemd[1]: Started sshd@22-146.190.59.17:22-147.75.109.163:42294.service - OpenSSH per-connection server daemon (147.75.109.163:42294).
Dec 13 08:49:54.390960 sshd[5720]: Accepted publickey for core from 147.75.109.163 port 42294 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM
Dec 13 08:49:54.393709 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:49:54.406792 systemd-logind[1572]: New session 23 of user core.
Dec 13 08:49:54.412031 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 08:49:54.785625 sshd[5720]: pam_unix(sshd:session): session closed for user core
Dec 13 08:49:54.793142 systemd-logind[1572]: Session 23 logged out. Waiting for processes to exit.
Dec 13 08:49:54.793987 systemd[1]: sshd@22-146.190.59.17:22-147.75.109.163:42294.service: Deactivated successfully.
Dec 13 08:49:54.801189 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 08:49:54.806185 systemd-logind[1572]: Removed session 23.
Dec 13 08:49:56.038047 kubelet[2743]: E1213 08:49:56.036116 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"