Jan 17 12:21:44.982065 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:21:44.982115 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:44.982136 kernel: BIOS-provided physical RAM map: Jan 17 12:21:44.982148 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 12:21:44.982159 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 12:21:44.982391 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 12:21:44.982405 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 17 12:21:44.982416 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 17 12:21:44.982427 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 12:21:44.982441 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 12:21:44.982448 kernel: NX (Execute Disable) protection: active Jan 17 12:21:44.982456 kernel: APIC: Static calls initialized Jan 17 12:21:44.982466 kernel: SMBIOS 2.8 present. Jan 17 12:21:44.982474 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 17 12:21:44.982482 kernel: Hypervisor detected: KVM Jan 17 12:21:44.982494 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:21:44.982504 kernel: kvm-clock: using sched offset of 2805102137 cycles Jan 17 12:21:44.982513 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:21:44.982522 kernel: tsc: Detected 2494.138 MHz processor Jan 17 12:21:44.982530 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:21:44.982538 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:21:44.982546 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 17 12:21:44.982554 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:21:44.982562 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:21:44.982574 kernel: ACPI: Early table checksum verification disabled Jan 17 12:21:44.982581 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 17 12:21:44.982590 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:44.982598 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:44.982605 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:44.982613 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 17 12:21:44.982621 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:44.982628 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:44.982637 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:44.982648 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:44.982655 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 17 12:21:44.982663 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 17 12:21:44.982671 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 17 12:21:44.982679 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 17 12:21:44.982686 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 17 12:21:44.982695 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 17 12:21:44.982711 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 17 12:21:44.982719 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:21:44.982727 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:21:44.982736 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 12:21:44.982744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 17 12:21:44.982755 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 17 12:21:44.982763 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 17 12:21:44.982776 kernel: Zone ranges: Jan 17 12:21:44.982784 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:21:44.982793 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 17 12:21:44.982801 kernel: Normal empty Jan 17 12:21:44.982809 kernel: Movable zone start for each node Jan 17 12:21:44.982818 kernel: Early memory node ranges Jan 17 12:21:44.982826 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:21:44.982834 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 17 12:21:44.982843 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 17 12:21:44.982855 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:21:44.982863 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:21:44.982873 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 17 12:21:44.982881 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:21:44.982890 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:21:44.982898 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:21:44.982906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:21:44.982915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:21:44.982923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:21:44.982935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:21:44.982943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:21:44.982952 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:21:44.982960 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:21:44.982968 kernel: TSC deadline timer available Jan 17 12:21:44.982977 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:21:44.982985 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:21:44.982996 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 17 12:21:44.983012 kernel: Booting paravirtualized kernel on KVM Jan 17 12:21:44.983026 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:21:44.983039 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:21:44.983048 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 17 12:21:44.983056 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:21:44.983064 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:21:44.983073 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 17 12:21:44.983082 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:44.983091 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:21:44.983100 kernel: random: crng init done Jan 17 12:21:44.983112 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:21:44.983120 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:21:44.983129 kernel: Fallback order for Node 0: 0 Jan 17 12:21:44.983137 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 17 12:21:44.983145 kernel: Policy zone: DMA32 Jan 17 12:21:44.983154 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:21:44.983163 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125148K reserved, 0K cma-reserved) Jan 17 12:21:44.983196 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:21:44.983209 kernel: Kernel/User page tables isolation: enabled Jan 17 12:21:44.983218 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:21:44.983226 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:21:44.983234 kernel: Dynamic Preempt: voluntary Jan 17 12:21:44.983243 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:21:44.983253 kernel: rcu: RCU event tracing is enabled. Jan 17 12:21:44.983262 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:21:44.983270 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:21:44.983279 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:21:44.983287 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:21:44.983299 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:21:44.983308 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:21:44.983316 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:21:44.983325 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
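The effective command line above carries a duplicated "rootflags=rw mount.usrflags=ro", apparently prepended by the bootloader in front of the BOOT_IMAGE arguments; the duplication is harmless here. A minimal way to re-check the command line and the BIOS e820 map from a booted system, assuming shell access and that the early boot ring buffer has not rotated out:

$ cat /proc/cmdline                 # effective kernel command line, as logged above
$ sudo dmesg | grep -i 'e820'       # BIOS-provided physical RAM map
$ head /proc/iomem                  # current physical address map derived from it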
Jan 17 12:21:44.983335 kernel: Console: colour VGA+ 80x25 Jan 17 12:21:44.983344 kernel: printk: console [tty0] enabled Jan 17 12:21:44.983353 kernel: printk: console [ttyS0] enabled Jan 17 12:21:44.983361 kernel: ACPI: Core revision 20230628 Jan 17 12:21:44.983370 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 12:21:44.983383 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:21:44.983391 kernel: x2apic enabled Jan 17 12:21:44.983400 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:21:44.983409 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:21:44.983417 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 17 12:21:44.983426 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jan 17 12:21:44.983434 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 17 12:21:44.983443 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 17 12:21:44.983467 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:21:44.983477 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:21:44.983485 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:21:44.983498 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:21:44.983507 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 17 12:21:44.983516 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:21:44.983525 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:21:44.983534 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 12:21:44.983543 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:21:44.983558 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:21:44.983567 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:21:44.983576 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:21:44.983585 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:21:44.983594 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 12:21:44.983603 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:21:44.983612 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:21:44.983621 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:21:44.983634 kernel: landlock: Up and running. Jan 17 12:21:44.983644 kernel: SELinux: Initializing. Jan 17 12:21:44.983653 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:21:44.983662 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:21:44.983671 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 17 12:21:44.983680 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:21:44.983690 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:21:44.983699 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:21:44.983708 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
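The Spectre/MDS/MMIO mitigation lines above have stable sysfs counterparts, so the active mitigations can be re-read at any time. A quick check, assuming a kernel with the standard CPU vulnerabilities interface:

$ grep . /sys/devices/system/cpu/vulnerabilities/*
# e.g. .../mmio_stale_data:Vulnerable: Clear CPU buffers attempted, no microcode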
Jan 17 12:21:44.983721 kernel: signal: max sigframe size: 1776 Jan 17 12:21:44.983730 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:21:44.983739 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:21:44.983748 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:21:44.983757 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:21:44.983766 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:21:44.983775 kernel: .... node #0, CPUs: #1 Jan 17 12:21:44.983784 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:21:44.983795 kernel: smpboot: Max logical packages: 1 Jan 17 12:21:44.983809 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jan 17 12:21:44.983817 kernel: devtmpfs: initialized Jan 17 12:21:44.983827 kernel: x86/mm: Memory block size: 128MB Jan 17 12:21:44.983836 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:21:44.983845 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:21:44.983853 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:21:44.983862 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:21:44.983872 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:21:44.983881 kernel: audit: type=2000 audit(1737116503.776:1): state=initialized audit_enabled=0 res=1 Jan 17 12:21:44.983894 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:21:44.983907 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:21:44.983917 kernel: cpuidle: using governor menu Jan 17 12:21:44.983926 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:21:44.983935 kernel: dca service started, version 1.12.1 Jan 17 12:21:44.983944 kernel: PCI: Using configuration type 1 for base access Jan 17 12:21:44.983953 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
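kvm-clock, tsc-early, hpet and acpi_pm are all registered as clocksources during this stretch of the boot; which one the kernel settled on can be read back from sysfs. A sketch with illustrative output, consistent with the "Switched to clocksource kvm-clock" line further down:

$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
kvm-clock
$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
kvm-clock tsc hpet acpi_pm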
Jan 17 12:21:44.983962 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:21:44.983971 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:21:44.983985 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:21:44.983994 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:21:44.984003 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:21:44.984012 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:21:44.984021 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:21:44.984030 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:21:44.984039 kernel: ACPI: Interpreter enabled Jan 17 12:21:44.984048 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:21:44.984057 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:21:44.984070 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:21:44.984079 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:21:44.984088 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 17 12:21:44.984097 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:21:44.985814 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:21:44.985964 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:21:44.986089 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:21:44.986118 kernel: acpiphp: Slot [3] registered Jan 17 12:21:44.986128 kernel: acpiphp: Slot [4] registered Jan 17 12:21:44.986138 kernel: acpiphp: Slot [5] registered Jan 17 12:21:44.986147 kernel: acpiphp: Slot [6] registered Jan 17 12:21:44.986156 kernel: acpiphp: Slot [7] registered Jan 17 12:21:44.987293 kernel: acpiphp: Slot [8] registered Jan 17 12:21:44.987328 kernel: acpiphp: Slot [9] registered Jan 17 12:21:44.987337 kernel: acpiphp: Slot [10] registered Jan 17 12:21:44.987347 kernel: acpiphp: Slot [11] registered Jan 17 12:21:44.987366 kernel: acpiphp: Slot [12] registered Jan 17 12:21:44.987376 kernel: acpiphp: Slot [13] registered Jan 17 12:21:44.987385 kernel: acpiphp: Slot [14] registered Jan 17 12:21:44.987394 kernel: acpiphp: Slot [15] registered Jan 17 12:21:44.987403 kernel: acpiphp: Slot [16] registered Jan 17 12:21:44.987412 kernel: acpiphp: Slot [17] registered Jan 17 12:21:44.987421 kernel: acpiphp: Slot [18] registered Jan 17 12:21:44.987430 kernel: acpiphp: Slot [19] registered Jan 17 12:21:44.987439 kernel: acpiphp: Slot [20] registered Jan 17 12:21:44.987448 kernel: acpiphp: Slot [21] registered Jan 17 12:21:44.987461 kernel: acpiphp: Slot [22] registered Jan 17 12:21:44.987470 kernel: acpiphp: Slot [23] registered Jan 17 12:21:44.987479 kernel: acpiphp: Slot [24] registered Jan 17 12:21:44.987488 kernel: acpiphp: Slot [25] registered Jan 17 12:21:44.987497 kernel: acpiphp: Slot [26] registered Jan 17 12:21:44.987506 kernel: acpiphp: Slot [27] registered Jan 17 12:21:44.987515 kernel: acpiphp: Slot [28] registered Jan 17 12:21:44.987530 kernel: acpiphp: Slot [29] registered Jan 17 12:21:44.987542 kernel: acpiphp: Slot [30] registered Jan 17 12:21:44.987561 kernel: acpiphp: Slot [31] registered Jan 17 12:21:44.987575 kernel: PCI host bridge to bus 0000:00 Jan 17 12:21:44.987849 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:21:44.988017 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 17 12:21:44.988158 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:21:44.990409 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 17 12:21:44.990562 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 17 12:21:44.990695 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:21:44.990916 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:21:44.991116 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 17 12:21:44.991251 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 17 12:21:44.991348 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 17 12:21:44.991456 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 17 12:21:44.991602 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 17 12:21:44.991772 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 17 12:21:44.991875 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 17 12:21:44.991994 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 17 12:21:44.992091 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 17 12:21:44.994333 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 12:21:44.994488 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 17 12:21:44.994598 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 17 12:21:44.994709 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 17 12:21:44.994805 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 17 12:21:44.994926 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 17 12:21:44.995039 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 17 12:21:44.995137 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 17 12:21:44.996490 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:21:44.996678 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:21:44.996785 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 17 12:21:44.996883 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 17 12:21:44.997011 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 17 12:21:44.997122 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:21:44.998393 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 17 12:21:44.998523 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 17 12:21:44.998635 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 17 12:21:44.998743 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 17 12:21:44.998840 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 17 12:21:44.998937 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 17 12:21:44.999051 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 17 12:21:45.000280 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 17 12:21:45.000423 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 17 12:21:45.000547 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 17 12:21:45.000708 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 17 12:21:45.000825 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 17 12:21:45.000925 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 17 12:21:45.001024 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 17 12:21:45.001124 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 17 12:21:45.002388 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 17 12:21:45.002518 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 17 12:21:45.002622 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 17 12:21:45.002635 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:21:45.002646 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:21:45.002656 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:21:45.002665 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:21:45.002674 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 12:21:45.002689 kernel: iommu: Default domain type: Translated Jan 17 12:21:45.002698 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:21:45.002708 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:21:45.002717 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:21:45.002726 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 12:21:45.002735 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 17 12:21:45.002839 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 17 12:21:45.002934 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 17 12:21:45.003031 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:21:45.003044 kernel: vgaarb: loaded Jan 17 12:21:45.003053 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 12:21:45.003063 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 12:21:45.003072 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:21:45.003081 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:21:45.003090 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:21:45.003099 kernel: pnp: PnP ACPI init Jan 17 12:21:45.003108 kernel: pnp: PnP ACPI: found 4 devices Jan 17 12:21:45.003122 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:21:45.003131 kernel: NET: Registered PF_INET protocol family Jan 17 12:21:45.003140 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:21:45.003149 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:21:45.003158 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:21:45.003182 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:21:45.004268 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:21:45.004279 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:21:45.004289 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:21:45.004305 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:21:45.004314 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:21:45.004324 kernel: NET: Registered PF_XDP protocol family Jan 17 12:21:45.004457 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:21:45.004545 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 
12:21:45.004631 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:21:45.004715 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 17 12:21:45.004800 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 17 12:21:45.004932 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 17 12:21:45.005035 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:21:45.005050 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 17 12:21:45.006219 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 28802 usecs Jan 17 12:21:45.006241 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:21:45.006252 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:21:45.006263 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 17 12:21:45.006272 kernel: Initialise system trusted keyrings Jan 17 12:21:45.006288 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:21:45.006312 kernel: Key type asymmetric registered Jan 17 12:21:45.006325 kernel: Asymmetric key parser 'x509' registered Jan 17 12:21:45.006337 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:21:45.006347 kernel: io scheduler mq-deadline registered Jan 17 12:21:45.006356 kernel: io scheduler kyber registered Jan 17 12:21:45.006365 kernel: io scheduler bfq registered Jan 17 12:21:45.006375 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:21:45.006385 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 17 12:21:45.006394 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 12:21:45.006408 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 12:21:45.006417 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:21:45.006426 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:21:45.006436 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:21:45.006445 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:21:45.006454 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:21:45.006613 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 12:21:45.006628 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:21:45.006726 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 12:21:45.006815 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:21:44 UTC (1737116504) Jan 17 12:21:45.006903 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 17 12:21:45.006915 kernel: intel_pstate: CPU model not supported Jan 17 12:21:45.006924 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:21:45.006933 kernel: Segment Routing with IPv6 Jan 17 12:21:45.006943 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:21:45.006952 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:21:45.006965 kernel: Key type dns_resolver registered Jan 17 12:21:45.006974 kernel: IPI shorthand broadcast: enabled Jan 17 12:21:45.006983 kernel: sched_clock: Marking stable (922003379, 108767886)->(1141795627, -111024362) Jan 17 12:21:45.006993 kernel: registered taskstats version 1 Jan 17 12:21:45.007002 kernel: Loading compiled-in X.509 certificates Jan 17 12:21:45.007011 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:21:45.007020 kernel: Key type .fscrypt registered 
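All of the [1af4:...] functions enumerated above are virtio devices (0x1af4 is the virtio PCI vendor ID: 1050 GPU, 1000 network, 1004 SCSI, 1001 block, 1002 balloon). They can be correlated with lspci -nn after boot, assuming pciutils is available (on Flatcar, e.g. inside the toolbox container); output abridged and illustrative:

$ lspci -nn | grep -i virtio
00:03.0 Ethernet controller [0200]: Red Hat, Inc. Virtio network device [1af4:1000]
00:06.0 SCSI storage controller [0100]: Red Hat, Inc. Virtio block device [1af4:1001]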
Jan 17 12:21:45.007029 kernel: Key type fscrypt-provisioning registered Jan 17 12:21:45.007038 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 12:21:45.007050 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:21:45.007060 kernel: ima: No architecture policies found Jan 17 12:21:45.007068 kernel: clk: Disabling unused clocks Jan 17 12:21:45.007077 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:21:45.007087 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:21:45.007120 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:21:45.007133 kernel: Run /init as init process Jan 17 12:21:45.007142 kernel: with arguments: Jan 17 12:21:45.007156 kernel: /init Jan 17 12:21:45.007182 kernel: with environment: Jan 17 12:21:45.008246 kernel: HOME=/ Jan 17 12:21:45.008257 kernel: TERM=linux Jan 17 12:21:45.008268 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:21:45.008281 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:21:45.008294 systemd[1]: Detected virtualization kvm. Jan 17 12:21:45.008304 systemd[1]: Detected architecture x86-64. Jan 17 12:21:45.008314 systemd[1]: Running in initrd. Jan 17 12:21:45.008330 systemd[1]: No hostname configured, using default hostname. Jan 17 12:21:45.008339 systemd[1]: Hostname set to . Jan 17 12:21:45.008350 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:21:45.008360 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:21:45.008371 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:45.008381 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:45.008412 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:21:45.008422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:21:45.008437 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:21:45.008447 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:21:45.008459 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:21:45.008469 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:21:45.008480 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:45.008493 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:45.008507 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:21:45.008517 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:21:45.008527 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:21:45.008540 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:21:45.008550 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:21:45.008560 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
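The \x2d sequences in the device unit names that follow are systemd's escaping of '-' inside path components; the mapping can be reproduced with systemd-escape, assuming the systemd 255 userland reported above:

$ systemd-escape --path /dev/disk/by-label/EFI-SYSTEM
dev-disk-by\x2dlabel-EFI\x2dSYSTEM
$ systemd-escape --path --suffix=device /dev/disk/by-partlabel/USR-A
dev-disk-by\x2dpartlabel-USR\x2dA.device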
Jan 17 12:21:45.008575 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:21:45.008585 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:21:45.008595 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:45.008606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:45.008616 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:45.008626 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:21:45.008637 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:21:45.008647 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:21:45.008661 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:21:45.008671 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:21:45.008681 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:21:45.008691 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:21:45.008701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:45.008711 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:21:45.008721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:45.008731 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:21:45.008745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:21:45.008795 systemd-journald[182]: Collecting audit messages is disabled. Jan 17 12:21:45.008825 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:45.008835 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:21:45.008848 systemd-journald[182]: Journal started Jan 17 12:21:45.008870 systemd-journald[182]: Runtime Journal (/run/log/journal/47ec6e9c40ce41e8ba23384146140862) is 4.9M, max 39.3M, 34.4M free. Jan 17 12:21:44.992677 systemd-modules-load[183]: Inserted module 'overlay' Jan 17 12:21:45.037711 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:21:45.037747 kernel: Bridge firewalling registered Jan 17 12:21:45.032791 systemd-modules-load[183]: Inserted module 'br_netfilter' Jan 17 12:21:45.044216 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:21:45.044853 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:45.046255 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:45.059497 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:45.062344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:45.069513 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:21:45.074874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:45.086504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:45.091490 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
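The bridge-firewalling notice above is the kernel's standard hint that br_netfilter is now a separate module (dracut loads it here via systemd-modules-load). The manual equivalent on a running system, assuming the usual module and sysctl paths:

$ sudo modprobe br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1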
Jan 17 12:21:45.097643 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:21:45.103747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:45.107276 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:21:45.119908 dracut-cmdline[216]: dracut-dracut-053 Jan 17 12:21:45.126273 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:45.161611 systemd-resolved[221]: Positive Trust Anchors: Jan 17 12:21:45.161636 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:21:45.161691 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:21:45.169979 systemd-resolved[221]: Defaulting to hostname 'linux'. Jan 17 12:21:45.173901 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:21:45.174477 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:45.239215 kernel: SCSI subsystem initialized Jan 17 12:21:45.252208 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:21:45.267205 kernel: iscsi: registered transport (tcp) Jan 17 12:21:45.289489 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:21:45.289609 kernel: QLogic iSCSI HBA Driver Jan 17 12:21:45.344783 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:21:45.351484 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:21:45.379245 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:21:45.379346 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:21:45.380263 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:21:45.430278 kernel: raid6: avx2x4 gen() 19240 MB/s Jan 17 12:21:45.446292 kernel: raid6: avx2x2 gen() 22650 MB/s Jan 17 12:21:45.463461 kernel: raid6: avx2x1 gen() 16615 MB/s Jan 17 12:21:45.463582 kernel: raid6: using algorithm avx2x2 gen() 22650 MB/s Jan 17 12:21:45.481494 kernel: raid6: .... xor() 14783 MB/s, rmw enabled Jan 17 12:21:45.481603 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:21:45.507232 kernel: xor: automatically using best checksumming function avx Jan 17 12:21:45.669213 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:21:45.683180 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:45.689414 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
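The trust-anchor block is systemd-resolved loading its built-in DNSSEC root anchor (key tag 20326, as logged) plus negative anchors for private zones. Once the real root is up, the same state is visible through resolvectl, assuming resolved stays enabled after switch-root:

$ resolvectl status | head
$ resolvectl query flatcar.org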
Jan 17 12:21:45.716040 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 17 12:21:45.721573 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:45.728421 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:21:45.761425 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 17 12:21:45.807096 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:45.813443 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:21:45.883190 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:45.892787 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:21:45.913437 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:45.915759 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:45.916573 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:45.918042 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:21:45.923862 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:21:45.953129 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:45.992211 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 17 12:21:46.022651 kernel: ACPI: bus type USB registered Jan 17 12:21:46.022675 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 17 12:21:46.022819 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:21:46.022833 kernel: usbcore: registered new interface driver usbfs Jan 17 12:21:46.022857 kernel: usbcore: registered new interface driver hub Jan 17 12:21:46.022870 kernel: usbcore: registered new device driver usb Jan 17 12:21:46.022882 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:21:46.022894 kernel: GPT:9289727 != 125829119 Jan 17 12:21:46.022906 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:21:46.022917 kernel: GPT:9289727 != 125829119 Jan 17 12:21:46.022928 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:21:46.022940 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:46.022951 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 17 12:21:46.034455 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Jan 17 12:21:46.034624 kernel: scsi host0: Virtio SCSI HBA Jan 17 12:21:46.050283 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:21:46.057277 kernel: AES CTR mode by8 optimization enabled Jan 17 12:21:46.078979 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:46.079497 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:46.081470 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:46.082029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:46.082239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:46.083732 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:46.091620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:46.097192 kernel: libata version 3.00 loaded. 
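The GPT warnings above ('9289727 != 125829119') mean the backup GPT header still sits where the smaller source image ended instead of at the end of the resized 64.4 GB disk; Flatcar's first-boot machinery normally repairs this itself when it grows the root partition. If a manual fix were ever needed, parted (as the kernel suggests) or sgdisk could move the backup structures; a sketch, assuming /dev/vda is the affected disk and the move is actually intended:

$ sudo parted /dev/vda print                  # parted offers to "Fix" the backup header
$ sudo sgdisk --move-second-header /dev/vda   # non-interactive equivalent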
Jan 17 12:21:46.102326 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 17 12:21:46.133434 kernel: scsi host1: ata_piix Jan 17 12:21:46.133621 kernel: scsi host2: ata_piix Jan 17 12:21:46.133856 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 17 12:21:46.133884 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 17 12:21:46.136206 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (456) Jan 17 12:21:46.140219 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) Jan 17 12:21:46.168648 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:21:46.203979 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 17 12:21:46.204400 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 17 12:21:46.204586 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 17 12:21:46.204807 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 17 12:21:46.204995 kernel: hub 1-0:1.0: USB hub found Jan 17 12:21:46.205250 kernel: hub 1-0:1.0: 2 ports detected Jan 17 12:21:46.204898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:46.209714 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:21:46.210246 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:21:46.215652 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:21:46.220091 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:21:46.227409 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:21:46.229402 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:46.240203 disk-uuid[529]: Primary Header is updated. Jan 17 12:21:46.240203 disk-uuid[529]: Secondary Entries is updated. Jan 17 12:21:46.240203 disk-uuid[529]: Secondary Header is updated. Jan 17 12:21:46.251233 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:46.260219 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:46.263570 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:46.276201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:47.267303 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:47.267507 disk-uuid[530]: The operation has completed successfully. Jan 17 12:21:47.320863 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:21:47.321032 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:21:47.333538 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:21:47.350272 sh[562]: Success Jan 17 12:21:47.367289 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:21:47.433517 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:21:47.443626 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:21:47.447694 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
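verity-setup.service has just built /dev/mapper/usr from the USR-A partition, checked against the verity.usrhash= root hash on the kernel command line. The resulting device-mapper table can be inspected once the system is up; a sketch, assuming veritysetup (part of cryptsetup) is present:

$ sudo veritysetup status usr   # type VERITY, data/hash devices, root hash
$ sudo dmsetup table usr        # raw dm-verity table for the same mapping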
Jan 17 12:21:47.477213 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:21:47.481231 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:47.481327 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:21:47.481343 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:21:47.481356 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:21:47.490575 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:21:47.492313 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:21:47.498562 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:21:47.501681 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:21:47.516362 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:47.516455 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:47.517402 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:21:47.521285 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:21:47.533714 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:21:47.534614 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:47.540982 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:21:47.550661 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:21:47.689781 ignition[642]: Ignition 2.19.0 Jan 17 12:21:47.689805 ignition[642]: Stage: fetch-offline Jan 17 12:21:47.689858 ignition[642]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:47.689868 ignition[642]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:47.692727 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:21:47.689989 ignition[642]: parsed url from cmdline: "" Jan 17 12:21:47.689993 ignition[642]: no config URL provided Jan 17 12:21:47.689999 ignition[642]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:21:47.690008 ignition[642]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:21:47.690015 ignition[642]: failed to fetch config: resource requires networking Jan 17 12:21:47.691213 ignition[642]: Ignition finished successfully Jan 17 12:21:47.708596 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:47.725601 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:21:47.750190 systemd-networkd[752]: lo: Link UP Jan 17 12:21:47.750210 systemd-networkd[752]: lo: Gained carrier Jan 17 12:21:47.753919 systemd-networkd[752]: Enumeration completed Jan 17 12:21:47.754425 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 12:21:47.754430 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 17 12:21:47.755462 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
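fetch-offline finding no embedded config ('resource requires networking') is expected on DigitalOcean: the user-data sits behind the metadata service, so Ignition defers to the networked fetch stage once systemd-networkd brings up eth0/eth1. Link state can be checked with networkctl; illustrative output, assuming the two-NIC droplet layout above:

$ networkctl list
IDX LINK TYPE     OPERATIONAL SETUP
  1 lo   loopback carrier     unmanaged
  2 eth0 ether    routable    configured
  3 eth1 ether    routable    configured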
Jan 17 12:21:47.755467 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:21:47.756374 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:21:47.757797 systemd-networkd[752]: eth0: Link UP Jan 17 12:21:47.757803 systemd-networkd[752]: eth0: Gained carrier Jan 17 12:21:47.757817 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 12:21:47.758599 systemd[1]: Reached target network.target - Network. Jan 17 12:21:47.762616 systemd-networkd[752]: eth1: Link UP Jan 17 12:21:47.762624 systemd-networkd[752]: eth1: Gained carrier Jan 17 12:21:47.762638 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:47.766251 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:21:47.774360 systemd-networkd[752]: eth0: DHCPv4 address 164.92.109.43/19, gateway 164.92.96.1 acquired from 169.254.169.253 Jan 17 12:21:47.783384 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.8/20 acquired from 169.254.169.253 Jan 17 12:21:47.799249 ignition[754]: Ignition 2.19.0 Jan 17 12:21:47.799265 ignition[754]: Stage: fetch Jan 17 12:21:47.799488 ignition[754]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:47.799499 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:47.799613 ignition[754]: parsed url from cmdline: "" Jan 17 12:21:47.799617 ignition[754]: no config URL provided Jan 17 12:21:47.799622 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:21:47.799630 ignition[754]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:21:47.799657 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 17 12:21:47.815900 ignition[754]: GET result: OK Jan 17 12:21:47.816050 ignition[754]: parsing config with SHA512: 9cbb057ad4096a011d9fbbd50a9a385fb6bae37a15121b7d3f51e445012d9c034f96cdf90dfc0be6eb408750870dd4aedc9934bc3c2606ca82e8ce443b16f918 Jan 17 12:21:47.821079 unknown[754]: fetched base config from "system" Jan 17 12:21:47.821091 unknown[754]: fetched base config from "system" Jan 17 12:21:47.821484 ignition[754]: fetch: fetch complete Jan 17 12:21:47.821098 unknown[754]: fetched user config from "digitalocean" Jan 17 12:21:47.821491 ignition[754]: fetch: fetch passed Jan 17 12:21:47.821546 ignition[754]: Ignition finished successfully Jan 17 12:21:47.823441 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:21:47.832512 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:21:47.862738 ignition[761]: Ignition 2.19.0 Jan 17 12:21:47.862753 ignition[761]: Stage: kargs Jan 17 12:21:47.862984 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:47.862994 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:47.865780 ignition[761]: kargs: kargs passed Jan 17 12:21:47.865854 ignition[761]: Ignition finished successfully Jan 17 12:21:47.867704 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:21:47.872586 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
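The fetch stage pulls user-data from the link-local metadata service shown in the log; the same endpoint answers plain HTTP from inside the droplet. Assuming curl is available:

$ curl -s http://169.254.169.254/metadata/v1/user-data
$ curl -s http://169.254.169.254/metadata/v1/hostname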
Jan 17 12:21:47.898484 ignition[767]: Ignition 2.19.0 Jan 17 12:21:47.898500 ignition[767]: Stage: disks Jan 17 12:21:47.898786 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:47.898801 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:47.900238 ignition[767]: disks: disks passed Jan 17 12:21:47.902455 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:21:47.900324 ignition[767]: Ignition finished successfully Jan 17 12:21:47.906629 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:47.907624 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:21:47.908445 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:21:47.909322 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:21:47.910081 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:21:47.918502 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:21:47.936024 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:21:47.939319 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:21:47.945511 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:21:48.048267 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:21:48.050071 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:21:48.051649 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:21:48.062423 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:21:48.066417 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:21:48.069001 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 17 12:21:48.076535 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:21:48.078652 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:21:48.088810 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (783) Jan 17 12:21:48.088852 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:48.088872 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:48.088889 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:21:48.078707 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:48.086668 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:21:48.098739 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:21:48.102880 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:21:48.107296 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
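systemd-fsck ran e2fsck over the ext4 ROOT filesystem before it was mounted at /sysroot ('clean, 14/553520 files'). The equivalent read-only check can be reproduced by hand while the volume is unmounted, assuming the ROOT label from the kernel command line:

$ sudo e2fsck -n /dev/disk/by-label/ROOT
ROOT: clean, 14/553520 files, 52654/553472 blocks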
Jan 17 12:21:48.193209 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:21:48.215389 coreos-metadata[786]: Jan 17 12:21:48.214 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:21:48.216646 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:21:48.222093 coreos-metadata[785]: Jan 17 12:21:48.222 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:21:48.227979 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:21:48.229121 coreos-metadata[786]: Jan 17 12:21:48.228 INFO Fetch successful Jan 17 12:21:48.239072 coreos-metadata[785]: Jan 17 12:21:48.238 INFO Fetch successful Jan 17 12:21:48.242197 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:21:48.244064 coreos-metadata[786]: Jan 17 12:21:48.242 INFO wrote hostname ci-4081.3.0-6-c2def92c28 to /sysroot/etc/hostname Jan 17 12:21:48.245274 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:21:48.253041 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 17 12:21:48.253241 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 17 12:21:48.385586 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:48.391466 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:21:48.395462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:21:48.411224 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:48.432200 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:21:48.450434 ignition[903]: INFO : Ignition 2.19.0 Jan 17 12:21:48.452294 ignition[903]: INFO : Stage: mount Jan 17 12:21:48.452294 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:48.452294 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:48.453683 ignition[903]: INFO : mount: mount passed Jan 17 12:21:48.454182 ignition[903]: INFO : Ignition finished successfully Jan 17 12:21:48.455452 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:21:48.461405 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:21:48.476526 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:21:48.482542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:21:48.505216 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (915) Jan 17 12:21:48.507424 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:48.507491 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:48.507528 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:21:48.520219 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:21:48.522888 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
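Two coreos-metadata instances run here: one feeds the DigitalOcean network agent, the other resolves the hostname and persists it into the new root, producing the "wrote hostname ci-4081.3.0-6-c2def92c28 to /sysroot/etc/hostname" entry. A sketch of that observable effect (the agent itself is a separate binary; the "hostname" field name follows DigitalOcean's metadata documentation, so treat it as an assumption):

    import json
    import urllib.request

    # Same metadata document both fetchers above request.
    with urllib.request.urlopen("http://169.254.169.254/metadata/v1.json",
                                timeout=5) as resp:
        meta = json.load(resp)

    # Persist the droplet name into the not-yet-pivoted root filesystem,
    # matching the coreos-metadata[786] log entry above.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(meta["hostname"] + "\n")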
Jan 17 12:21:48.550387 ignition[931]: INFO : Ignition 2.19.0 Jan 17 12:21:48.552288 ignition[931]: INFO : Stage: files Jan 17 12:21:48.552288 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:48.552288 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:48.553984 ignition[931]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:21:48.555858 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:21:48.556537 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:21:48.560561 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:21:48.561518 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:21:48.562794 unknown[931]: wrote ssh authorized keys file for user: core Jan 17 12:21:48.563632 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:21:48.566393 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:21:48.566393 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:21:48.604988 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:21:48.691021 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:21:48.691021 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:48.693232 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:21:49.155759 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:21:49.237347 systemd-networkd[752]: eth0: Gained IPv6LL Jan 17 12:21:49.407242 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:49.407242 ignition[931]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:21:49.408818 ignition[931]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:49.408818 ignition[931]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:49.408818 ignition[931]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:21:49.408818 ignition[931]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:49.411311 ignition[931]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:49.411311 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:49.411311 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:49.411311 ignition[931]: INFO : files: files passed Jan 17 12:21:49.411311 ignition[931]: INFO : Ignition finished successfully Jan 17 12:21:49.411889 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:21:49.421514 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:21:49.424434 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:21:49.426461 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:21:49.427160 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:21:49.429260 systemd-networkd[752]: eth1: Gained IPv6LL Jan 17 12:21:49.446383 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:49.446383 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:49.449461 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:49.451431 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:49.452141 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:21:49.456454 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:21:49.510553 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jan 17 12:21:49.510689 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:21:49.511835 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:21:49.512369 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:21:49.513282 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:21:49.514441 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:21:49.547705 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:49.557703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:21:49.572452 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:49.573990 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:49.575632 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:21:49.576366 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:21:49.576566 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:49.577480 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:21:49.577934 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:21:49.578984 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:21:49.579992 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:49.580638 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:49.581319 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:21:49.582079 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:49.582909 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:21:49.583750 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:21:49.584840 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:21:49.585645 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:21:49.585802 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:49.586679 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:49.587558 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:49.588378 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:21:49.588610 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:49.589340 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:21:49.589501 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:49.590620 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:21:49.590756 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:49.591971 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:21:49.592080 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:21:49.592905 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:21:49.593043 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
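Every operation the files stage logged earlier (the helm tarball download, the small files under /home/core, update.conf, the kubernetes sysext image plus its /etc/extensions link, and prepare-helm.service with an enabled preset) corresponds to an entry in the user-provided Ignition config. That config was only ever logged by hash, so the following is an abridged, hypothetical reconstruction rather than the actual user-data, expressed as a Python dict in the shape of the Ignition v3 spec:

    import json

    # Hypothetical reconstruction from the ignition[931] entries above.
    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,  # matches "setting preset to enabled"
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",
            }],
        },
    }
    print(json.dumps(config, indent=2))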
Jan 17 12:21:49.608578 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:21:49.609791 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:21:49.610086 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:49.615482 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:21:49.615897 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:21:49.616112 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:49.616682 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:21:49.616809 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:49.627340 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:21:49.628181 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:21:49.633122 ignition[984]: INFO : Ignition 2.19.0 Jan 17 12:21:49.633122 ignition[984]: INFO : Stage: umount Jan 17 12:21:49.637569 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:49.637569 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:49.639511 ignition[984]: INFO : umount: umount passed Jan 17 12:21:49.640253 ignition[984]: INFO : Ignition finished successfully Jan 17 12:21:49.642512 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:21:49.643515 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:21:49.649808 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:21:49.650011 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:21:49.650883 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:21:49.650999 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:21:49.651582 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:21:49.651662 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:21:49.654858 systemd[1]: Stopped target network.target - Network. Jan 17 12:21:49.665934 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:21:49.666069 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:21:49.666974 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:21:49.667437 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:21:49.679570 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:49.680372 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:21:49.680814 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:21:49.681364 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:21:49.681438 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:21:49.681916 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:21:49.681978 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:21:49.682503 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:21:49.682576 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:21:49.683102 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:21:49.683219 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 17 12:21:49.684386 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:21:49.685428 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:21:49.688041 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:21:49.688299 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 17 12:21:49.688977 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:21:49.689137 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:21:49.691579 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:21:49.691743 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:49.695008 systemd-networkd[752]: eth1: DHCPv6 lease lost Jan 17 12:21:49.696369 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:21:49.696517 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:21:49.699828 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:21:49.700505 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:21:49.702499 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:21:49.702596 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:49.707415 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:21:49.708429 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:21:49.708527 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:49.710524 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:21:49.710603 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:49.712650 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:21:49.712711 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:49.713572 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:21:49.713614 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:49.719376 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:49.740622 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:21:49.740847 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:49.741919 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:21:49.741972 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:49.742454 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:21:49.742492 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:49.742820 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:21:49.742866 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:49.743345 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:21:49.743389 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:21:49.744081 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:49.744127 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:49.746431 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 17 12:21:49.747391 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:21:49.747449 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:49.748516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:49.748564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:49.752444 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:21:49.752995 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:21:49.765899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:21:49.766042 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:21:49.767548 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:21:49.772461 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:21:49.791750 systemd[1]: Switching root. Jan 17 12:21:49.823397 systemd-journald[182]: Journal stopped Jan 17 12:21:50.969627 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Jan 17 12:21:50.969723 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:21:50.969746 kernel: SELinux: policy capability open_perms=1 Jan 17 12:21:50.969759 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:21:50.969771 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:21:50.969787 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:21:50.969800 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:21:50.969815 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:21:50.969829 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:21:50.969845 kernel: audit: type=1403 audit(1737116509.956:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:21:50.969862 systemd[1]: Successfully loaded SELinux policy in 37.548ms. Jan 17 12:21:50.969889 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.250ms. Jan 17 12:21:50.969904 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:21:50.969918 systemd[1]: Detected virtualization kvm. Jan 17 12:21:50.969931 systemd[1]: Detected architecture x86-64. Jan 17 12:21:50.969943 systemd[1]: Detected first boot. Jan 17 12:21:50.969959 systemd[1]: Hostname set to <ci-4081.3.0-6-c2def92c28>. Jan 17 12:21:50.969973 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:21:50.969986 zram_generator::config[1028]: No configuration found. Jan 17 12:21:50.970003 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:21:50.970017 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:21:50.970031 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:21:50.970067 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:21:50.970088 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:21:50.970108 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:21:50.970126 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
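"Initializing machine ID from VM UUID" means systemd derives /etc/machine-id from the SMBIOS product UUID the hypervisor exposes rather than generating a random one on first boot. The value it reads is visible in sysfs (root only); a quick sketch:

    import pathlib

    # SMBIOS/DMI product UUID exposed by KVM; readable only by root.
    uuid = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    print("VM UUID:", uuid)  # systemd derives the machine ID from this value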
Jan 17 12:21:50.970146 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:21:50.971275 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:21:50.971307 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:21:50.971320 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:21:50.971334 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:21:50.971348 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:50.971362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:50.971376 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:21:50.971388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:21:50.971409 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:21:50.971423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:21:50.971436 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:21:50.971450 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:50.971462 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:21:50.971477 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:21:50.971491 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:21:50.971526 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:21:50.971538 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:50.971552 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:21:50.971565 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:21:50.971579 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:21:50.971592 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:21:50.971604 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:21:50.971618 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:50.971630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:50.971647 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:50.971661 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:21:50.971681 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:21:50.971696 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:21:50.971709 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:21:50.971722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:50.971735 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:21:50.971748 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:21:50.971760 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 17 12:21:50.971778 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:21:50.971791 systemd[1]: Reached target machines.target - Containers. Jan 17 12:21:50.971804 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:21:50.971816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:50.971829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:21:50.971842 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:21:50.971855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:50.971868 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:21:50.971884 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:50.971897 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:21:50.971909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:50.971921 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:21:50.971935 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:21:50.971947 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:21:50.971960 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:21:50.971973 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:21:50.971989 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:21:50.972003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:21:50.972015 kernel: loop: module loaded Jan 17 12:21:50.972030 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:21:50.972042 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:21:50.972055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:21:50.972068 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:21:50.972081 systemd[1]: Stopped verity-setup.service. Jan 17 12:21:50.972095 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:50.972112 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:21:50.972125 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:21:50.972137 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:21:50.972149 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:21:50.972163 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:21:50.973298 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:21:50.973325 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:50.973342 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:21:50.973355 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jan 17 12:21:50.973370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:50.973387 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:50.973404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:50.973417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:50.973430 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:50.973444 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:50.973456 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:50.973470 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:21:50.973483 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:21:50.973496 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:21:50.973514 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:21:50.973528 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:21:50.973540 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:21:50.973553 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:21:50.973567 kernel: fuse: init (API version 7.39) Jan 17 12:21:50.973581 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:21:50.973640 systemd-journald[1102]: Collecting audit messages is disabled. Jan 17 12:21:50.973668 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:21:50.973687 systemd-journald[1102]: Journal started Jan 17 12:21:50.973713 systemd-journald[1102]: Runtime Journal (/run/log/journal/47ec6e9c40ce41e8ba23384146140862) is 4.9M, max 39.3M, 34.4M free. Jan 17 12:21:50.590814 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:21:50.610430 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:21:50.610901 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:21:50.980197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:50.988209 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:21:50.994210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:51.001201 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:21:51.005277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:51.012206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:51.017264 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:21:51.025134 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:21:51.044787 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:21:51.045629 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 17 12:21:51.045789 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:21:51.046510 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:21:51.047301 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:21:51.089756 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:21:51.098412 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:21:51.110422 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:21:51.118581 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:21:51.128658 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:21:51.131203 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:21:51.134068 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:21:51.139946 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:21:51.149255 kernel: ACPI: bus type drm_connector registered Jan 17 12:21:51.153295 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:21:51.154160 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:21:51.154504 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:21:51.155768 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:21:51.178981 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:51.189142 systemd-journald[1102]: Time spent on flushing to /var/log/journal/47ec6e9c40ce41e8ba23384146140862 is 81.326ms for 994 entries. Jan 17 12:21:51.189142 systemd-journald[1102]: System Journal (/var/log/journal/47ec6e9c40ce41e8ba23384146140862) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:21:51.298791 systemd-journald[1102]: Received client request to flush runtime journal. Jan 17 12:21:51.298859 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:21:51.298876 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 12:21:51.302195 kernel: loop2: detected capacity change from 0 to 211296 Jan 17 12:21:51.232268 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:21:51.240522 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:21:51.250604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:51.258559 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:21:51.285658 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:21:51.317822 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:21:51.323589 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Jan 17 12:21:51.323611 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Jan 17 12:21:51.339305 kernel: loop3: detected capacity change from 0 to 8 Jan 17 12:21:51.345971 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
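The journald flush report above gives enough to gauge per-entry cost: 81.326 ms to move 994 entries from the runtime journal in /run to persistent storage under /var/log/journal, which works out to well under a tenth of a millisecond per entry. The arithmetic:

    # Figures from the systemd-journald[1102] flush report above.
    ms_total, entries = 81.326, 994
    print(f"{ms_total / entries:.3f} ms per entry")  # ~0.082 ms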
Jan 17 12:21:51.377211 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:21:51.412213 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 12:21:51.432218 kernel: loop6: detected capacity change from 0 to 211296 Jan 17 12:21:51.449669 kernel: loop7: detected capacity change from 0 to 8 Jan 17 12:21:51.453700 (sd-merge)[1171]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 17 12:21:51.454382 (sd-merge)[1171]: Merged extensions into '/usr'. Jan 17 12:21:51.467151 systemd[1]: Reloading requested from client PID 1127 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:21:51.467404 systemd[1]: Reloading... Jan 17 12:21:51.588217 zram_generator::config[1194]: No configuration found. Jan 17 12:21:51.856976 ldconfig[1120]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:21:51.947648 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:52.032605 systemd[1]: Reloading finished in 564 ms. Jan 17 12:21:52.085386 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:21:52.087004 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:21:52.102635 systemd[1]: Starting ensure-sysext.service... Jan 17 12:21:52.112583 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:21:52.213920 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:21:52.213940 systemd[1]: Reloading... Jan 17 12:21:52.350917 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:21:52.351690 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:21:52.359061 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:21:52.359415 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 17 12:21:52.359484 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 17 12:21:52.373743 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:21:52.373760 systemd-tmpfiles[1241]: Skipping /boot Jan 17 12:21:52.382228 zram_generator::config[1265]: No configuration found. Jan 17 12:21:52.411050 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:21:52.411073 systemd-tmpfiles[1241]: Skipping /boot Jan 17 12:21:52.576071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:52.638743 systemd[1]: Reloading finished in 419 ms. Jan 17 12:21:52.659296 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:21:52.664879 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:52.678473 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:52.682478 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
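The (sd-merge) lines above are systemd-sysext at work: it discovers the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean') and overlays them onto /usr, which is how the kubernetes.raw link written during the files stage becomes live content. Discovery is driven by the *.raw images and symlinks in the standard extension directories; a sketch that lists what sysext would pick up (the directory set is systemd's documented search path, the enumeration code is only illustrative):

    import pathlib

    # systemd-sysext's search path for extension images.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        p = pathlib.Path(d)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            # e.g. kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
            print(entry.name, "->", entry.resolve())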
Jan 17 12:21:52.686337 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:21:52.692446 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:21:52.700569 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:52.710444 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:21:52.717673 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:52.718340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:52.725659 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:52.727548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:52.732521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:52.733570 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:52.734457 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:52.756637 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:21:52.757832 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:21:52.762070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:52.762660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:52.763360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:52.763942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:52.770013 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:52.770403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:52.779569 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:21:52.780404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:52.780632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:52.781771 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:52.782024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:52.786807 systemd[1]: Finished ensure-sysext.service. Jan 17 12:21:52.795520 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:21:52.798627 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:21:52.808515 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 17 12:21:52.820711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:52.822288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:52.822915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:52.825707 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:52.825959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:52.828016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:52.835784 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:21:52.835974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:21:52.845842 augenrules[1348]: No rules Jan 17 12:21:52.846028 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:21:52.847259 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:52.854188 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Jan 17 12:21:52.859906 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:21:52.861897 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:21:52.877211 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:52.884399 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:21:52.895540 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:21:53.034552 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:21:53.051458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1378) Jan 17 12:21:53.060887 systemd-networkd[1358]: lo: Link UP Jan 17 12:21:53.064578 systemd-networkd[1358]: lo: Gained carrier Jan 17 12:21:53.069414 systemd-networkd[1358]: Enumeration completed Jan 17 12:21:53.071959 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:21:53.073347 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:21:53.073849 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:21:53.079807 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 17 12:21:53.080242 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:53.080401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:53.088421 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:53.088956 systemd-resolved[1320]: Positive Trust Anchors: Jan 17 12:21:53.089885 systemd-resolved[1320]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:21:53.089929 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:21:53.091381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:53.096444 systemd-resolved[1320]: Using system hostname 'ci-4081.3.0-6-c2def92c28'. Jan 17 12:21:53.100908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:53.101489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:53.105405 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:21:53.105824 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:21:53.105858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:53.106056 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:21:53.109008 systemd[1]: Reached target network.target - Network. Jan 17 12:21:53.110509 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:53.138198 kernel: ISO 9660 Extensions: RRIP_1991A Jan 17 12:21:53.142114 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 17 12:21:53.143482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:53.144266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:53.156523 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:53.156691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:53.157349 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:53.161871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:53.162356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:53.165717 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:53.185204 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 12:21:53.204965 systemd-networkd[1358]: eth1: Configuring with /run/systemd/network/10-e6:36:cf:6b:39:84.network. Jan 17 12:21:53.206521 systemd-networkd[1358]: eth1: Link UP Jan 17 12:21:53.206529 systemd-networkd[1358]: eth1: Gained carrier Jan 17 12:21:53.210601 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. 
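Unlike the initrd, where networkd warned about matching on a "potentially unpredictable interface name", the real root carries per-NIC units in /run/systemd/network keyed by MAC address (10-e6:36:cf:6b:39:84.network for eth1 above, 10-26:c6:f1:0c:f4:fd.network for eth0 a little further down). The units' contents were never logged, so the sketch below writes a hypothetical minimal unit of that shape; only the filename and MAC address come from the log, the rest is purely illustrative:

    import pathlib

    # Hypothetical contents; only the filename and MAC come from the log.
    unit = "\n".join([
        "[Match]",
        "MACAddress=e6:36:cf:6b:39:84",
        "",
        "[Network]",
        "DHCP=ipv4",
        "",
    ])
    pathlib.Path("/run/systemd/network/10-e6:36:cf:6b:39:84.network").write_text(unit)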
Jan 17 12:21:53.215216 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:21:53.225189 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:21:53.252828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:21:53.262408 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:21:53.269495 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:21:53.275187 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 12:21:53.277301 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 12:21:53.277907 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:21:53.278306 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 12:21:53.278340 kernel: [drm] features: -context_init Jan 17 12:21:53.282066 systemd-networkd[1358]: eth0: Configuring with /run/systemd/network/10-26:c6:f1:0c:f4:fd.network. Jan 17 12:21:53.282800 systemd-networkd[1358]: eth0: Link UP Jan 17 12:21:53.282809 systemd-networkd[1358]: eth0: Gained carrier Jan 17 12:21:53.291201 kernel: [drm] number of scanouts: 1 Jan 17 12:21:53.291301 kernel: [drm] number of cap sets: 0 Jan 17 12:21:53.291318 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 12:21:53.298275 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 12:21:53.298351 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:21:53.303048 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 12:21:53.306961 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:21:53.325549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:53.330619 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:21:53.342405 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:53.343917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:53.355529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:53.370739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:53.370939 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:53.402577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:53.510208 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:21:53.536605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:53.538545 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:21:53.545404 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:21:53.571185 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:53.602819 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:21:53.603368 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:53.603517 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:21:53.603763 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 17 12:21:53.603934 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:21:53.604354 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:21:53.604573 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:21:53.604679 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:21:53.604771 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:21:53.604812 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:21:53.604888 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:21:53.609668 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:21:53.613054 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:21:53.629145 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:21:53.640436 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:21:53.641692 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:21:53.643152 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:21:53.645501 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:21:53.646058 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:53.646092 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:53.647193 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:53.653425 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:21:53.658098 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:21:53.666524 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:21:53.671345 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:21:53.679656 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:21:53.680359 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:21:53.689234 jq[1429]: false Jan 17 12:21:53.690411 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:21:53.701344 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:21:53.714475 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:21:53.717001 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:21:53.729398 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:21:53.730897 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:21:53.733022 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:21:53.741475 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 17 12:21:53.745878 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:21:53.749241 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:21:53.753121 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:21:53.753450 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:21:53.760665 dbus-daemon[1428]: [system] SELinux support is enabled Jan 17 12:21:53.761119 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:21:53.767418 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:21:53.767462 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:21:53.772768 coreos-metadata[1427]: Jan 17 12:21:53.772 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:21:53.769508 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:21:53.783618 extend-filesystems[1432]: Found loop4 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found loop5 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found loop6 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found loop7 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found vda Jan 17 12:21:53.783618 extend-filesystems[1432]: Found vda1 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found vda2 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found vda3 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found usr Jan 17 12:21:53.783618 extend-filesystems[1432]: Found vda4 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found vda6 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found vda7 Jan 17 12:21:53.783618 extend-filesystems[1432]: Found vda9 Jan 17 12:21:53.783618 extend-filesystems[1432]: Checking size of /dev/vda9 Jan 17 12:21:53.769592 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 17 12:21:53.863996 coreos-metadata[1427]: Jan 17 12:21:53.787 INFO Fetch successful Jan 17 12:21:53.769613 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:21:53.864106 update_engine[1445]: I20250117 12:21:53.839720 1445 main.cc:92] Flatcar Update Engine starting Jan 17 12:21:53.864106 update_engine[1445]: I20250117 12:21:53.852263 1445 update_check_scheduler.cc:74] Next update check in 9m7s Jan 17 12:21:53.783554 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:21:53.887735 extend-filesystems[1432]: Resized partition /dev/vda9 Jan 17 12:21:53.785286 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:21:53.889959 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:21:53.901748 jq[1446]: true Jan 17 12:21:53.910920 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 17 12:21:53.786442 systemd[1]: motdgen.service: Deactivated successfully. 
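The EXT4 resize just logged grows the root filesystem from 553472 to 15121403 blocks, and the resize2fs summary that follows confirms 4 KiB blocks: roughly 2.1 GiB expanding to 57.7 GiB, i.e. the first-boot step where the root partition is grown to fill the droplet's disk. Checking the arithmetic:

# Figures taken from the kernel and resize2fs records around this point.
BLOCK_SIZE = 4096          # "(4k) blocks" per the resize2fs summary below
OLD_BLOCKS = 553_472
NEW_BLOCKS = 15_121_403

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")  # ~2.11 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")  # ~57.68 GiB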
Jan 17 12:21:53.913659 tar[1448]: linux-amd64/helm Jan 17 12:21:53.787667 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:21:53.851608 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:21:53.914603 jq[1459]: true Jan 17 12:21:53.851761 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:21:53.871703 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:21:53.941208 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1376) Jan 17 12:21:53.992259 systemd-logind[1442]: New seat seat0. Jan 17 12:21:53.996948 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:21:53.997703 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:21:53.999936 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:21:54.054441 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:21:54.056268 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:21:54.079979 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:21:54.098451 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:21:54.121674 systemd[1]: Starting sshkeys.service... Jan 17 12:21:54.132588 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 12:21:54.152963 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:21:54.152963 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 12:21:54.152963 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 12:21:54.181297 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Jan 17 12:21:54.181297 extend-filesystems[1432]: Found vdb Jan 17 12:21:54.159633 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:21:54.161308 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:21:54.201348 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:21:54.210781 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:21:54.247163 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:21:54.340278 coreos-metadata[1500]: Jan 17 12:21:54.339 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:21:54.353559 coreos-metadata[1500]: Jan 17 12:21:54.352 INFO Fetch successful Jan 17 12:21:54.371992 unknown[1500]: wrote ssh authorized keys file for user: core Jan 17 12:21:54.415149 update-ssh-keys[1508]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:21:54.419080 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:21:54.422559 systemd[1]: Finished sshkeys.service. Jan 17 12:21:54.435028 containerd[1460]: time="2025-01-17T12:21:54.434531323Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:21:54.492518 containerd[1460]: time="2025-01-17T12:21:54.492113142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.501883734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.501934841Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.501956471Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.502210588Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.502236550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.502322640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.502336720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.502561064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.502576573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.502589920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503458 containerd[1460]: time="2025-01-17T12:21:54.502599761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503774 containerd[1460]: time="2025-01-17T12:21:54.502673112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503774 containerd[1460]: time="2025-01-17T12:21:54.502884118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503774 containerd[1460]: time="2025-01-17T12:21:54.502998724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:54.503774 containerd[1460]: time="2025-01-17T12:21:54.503012257Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 17 12:21:54.503774 containerd[1460]: time="2025-01-17T12:21:54.503115720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:21:54.504702 containerd[1460]: time="2025-01-17T12:21:54.504660384Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:21:54.511738 containerd[1460]: time="2025-01-17T12:21:54.511680259Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:21:54.512191 containerd[1460]: time="2025-01-17T12:21:54.512085740Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:21:54.512283 containerd[1460]: time="2025-01-17T12:21:54.512217574Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:21:54.512283 containerd[1460]: time="2025-01-17T12:21:54.512251299Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:21:54.512283 containerd[1460]: time="2025-01-17T12:21:54.512275877Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:21:54.512734 containerd[1460]: time="2025-01-17T12:21:54.512558468Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:21:54.513220 containerd[1460]: time="2025-01-17T12:21:54.512996743Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:21:54.515333 containerd[1460]: time="2025-01-17T12:21:54.515282102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:21:54.515333 containerd[1460]: time="2025-01-17T12:21:54.515335007Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:21:54.515451 containerd[1460]: time="2025-01-17T12:21:54.515357313Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:21:54.515451 containerd[1460]: time="2025-01-17T12:21:54.515381254Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:21:54.515451 containerd[1460]: time="2025-01-17T12:21:54.515401797Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:21:54.515451 containerd[1460]: time="2025-01-17T12:21:54.515424250Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:21:54.515451 containerd[1460]: time="2025-01-17T12:21:54.515446688Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:21:54.515576 containerd[1460]: time="2025-01-17T12:21:54.515467700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:21:54.515576 containerd[1460]: time="2025-01-17T12:21:54.515493296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:21:54.515576 containerd[1460]: time="2025-01-17T12:21:54.515511841Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 17 12:21:54.515576 containerd[1460]: time="2025-01-17T12:21:54.515529498Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:21:54.515576 containerd[1460]: time="2025-01-17T12:21:54.515557672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515672 containerd[1460]: time="2025-01-17T12:21:54.515578052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515672 containerd[1460]: time="2025-01-17T12:21:54.515596295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515672 containerd[1460]: time="2025-01-17T12:21:54.515616010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515672 containerd[1460]: time="2025-01-17T12:21:54.515633026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515672 containerd[1460]: time="2025-01-17T12:21:54.515653378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515801 containerd[1460]: time="2025-01-17T12:21:54.515671267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515801 containerd[1460]: time="2025-01-17T12:21:54.515692040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515801 containerd[1460]: time="2025-01-17T12:21:54.515712331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515801 containerd[1460]: time="2025-01-17T12:21:54.515732949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515801 containerd[1460]: time="2025-01-17T12:21:54.515768668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515899 containerd[1460]: time="2025-01-17T12:21:54.515797524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515899 containerd[1460]: time="2025-01-17T12:21:54.515818136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515899 containerd[1460]: time="2025-01-17T12:21:54.515841072Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:21:54.515899 containerd[1460]: time="2025-01-17T12:21:54.515876108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515998 containerd[1460]: time="2025-01-17T12:21:54.515896700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.515998 containerd[1460]: time="2025-01-17T12:21:54.515924056Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:21:54.516042 containerd[1460]: time="2025-01-17T12:21:54.515996007Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 17 12:21:54.516065 containerd[1460]: time="2025-01-17T12:21:54.516021809Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:21:54.516065 containerd[1460]: time="2025-01-17T12:21:54.516050980Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:21:54.516121 containerd[1460]: time="2025-01-17T12:21:54.516073665Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:21:54.516144 containerd[1460]: time="2025-01-17T12:21:54.516116527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.516179 containerd[1460]: time="2025-01-17T12:21:54.516145698Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:21:54.519298 containerd[1460]: time="2025-01-17T12:21:54.516164127Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:21:54.519298 containerd[1460]: time="2025-01-17T12:21:54.518231488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:21:54.519448 containerd[1460]: time="2025-01-17T12:21:54.518546315Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:21:54.519448 containerd[1460]: time="2025-01-17T12:21:54.518615730Z" level=info msg="Connect containerd service" Jan 17 12:21:54.519448 containerd[1460]: time="2025-01-17T12:21:54.518660081Z" level=info msg="using legacy CRI server" Jan 17 12:21:54.519448 containerd[1460]: time="2025-01-17T12:21:54.518668818Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:21:54.519448 containerd[1460]: time="2025-01-17T12:21:54.518779510Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:21:54.525440 containerd[1460]: time="2025-01-17T12:21:54.521666388Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:21:54.525440 containerd[1460]: time="2025-01-17T12:21:54.522054853Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:21:54.525440 containerd[1460]: time="2025-01-17T12:21:54.522098777Z" level=info msg="Start subscribing containerd event" Jan 17 12:21:54.525440 containerd[1460]: time="2025-01-17T12:21:54.522185773Z" level=info msg="Start recovering state" Jan 17 12:21:54.525440 containerd[1460]: time="2025-01-17T12:21:54.522288542Z" level=info msg="Start event monitor" Jan 17 12:21:54.525440 containerd[1460]: time="2025-01-17T12:21:54.522309827Z" level=info msg="Start snapshots syncer" Jan 17 12:21:54.525440 containerd[1460]: time="2025-01-17T12:21:54.522329372Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:21:54.525440 containerd[1460]: time="2025-01-17T12:21:54.522337957Z" level=info msg="Start streaming server" Jan 17 12:21:54.525440 containerd[1460]: time="2025-01-17T12:21:54.522104400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:21:54.522630 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:21:54.527385 containerd[1460]: time="2025-01-17T12:21:54.527338553Z" level=info msg="containerd successfully booted in 0.096022s" Jan 17 12:21:54.548435 systemd-networkd[1358]: eth0: Gained IPv6LL Jan 17 12:21:54.551559 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:21:54.557111 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:21:54.568503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:54.571875 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:21:54.660708 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:21:54.677282 systemd-networkd[1358]: eth1: Gained IPv6LL Jan 17 12:21:54.712860 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:21:54.750834 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:21:54.762078 systemd[1]: Starting issuegen.service - Generate /run/issue... 
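The CRI plugin's "no network config found in /etc/cni/net.d" error above is expected this early: containerd starts its CNI conf syncer and pod networking stays unconfigured until the cluster bootstrap later installs a network plugin that writes a config file there. A sketch of the same readiness check; the directory comes from the log, while the accepted suffixes are an assumption based on common CNI loader behavior:

from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")           # path from the containerd error above
CNI_SUFFIXES = {".conf", ".conflist", ".json"}  # assumed loader suffixes

if CNI_CONF_DIR.is_dir():
    confs = sorted(p for p in CNI_CONF_DIR.iterdir() if p.suffix in CNI_SUFFIXES)
else:
    confs = []

if not confs:
    print("cni plugin not initialized: no network config found in", CNI_CONF_DIR)
else:
    for p in confs:
        print("found CNI config:", p)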
Jan 17 12:21:54.789777 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:21:54.790809 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:21:54.801322 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:21:54.825975 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:21:54.836988 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:21:54.847745 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:21:54.851429 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:21:54.953057 tar[1448]: linux-amd64/LICENSE Jan 17 12:21:54.953473 tar[1448]: linux-amd64/README.md Jan 17 12:21:54.973207 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:21:55.687812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:55.689818 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:21:55.692675 systemd[1]: Startup finished in 1.067s (kernel) + 5.253s (initrd) + 5.772s (userspace) = 12.093s. Jan 17 12:21:55.698874 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:21:56.469875 kubelet[1552]: E0117 12:21:56.469788 1552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:21:56.473648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:21:56.474062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:21:56.474564 systemd[1]: kubelet.service: Consumed 1.311s CPU time. Jan 17 12:21:59.426766 systemd-timesyncd[1340]: Contacted time server 173.73.96.68:123 (1.flatcar.pool.ntp.org). Jan 17 12:21:59.426847 systemd-timesyncd[1340]: Initial clock synchronization to Fri 2025-01-17 12:21:59.714808 UTC. Jan 17 12:21:59.596538 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:21:59.612571 systemd[1]: Started sshd@0-164.92.109.43:22-139.178.68.195:48578.service - OpenSSH per-connection server daemon (139.178.68.195:48578). Jan 17 12:21:59.670979 sshd[1565]: Accepted publickey for core from 139.178.68.195 port 48578 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:59.673549 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:59.683061 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:21:59.691528 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:21:59.695539 systemd-logind[1442]: New session 1 of user core. Jan 17 12:21:59.710661 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:21:59.722767 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:21:59.727578 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:21:59.842655 systemd[1569]: Queued start job for default target default.target. Jan 17 12:21:59.855740 systemd[1569]: Created slice app.slice - User Application Slice. Jan 17 12:21:59.855994 systemd[1569]: Reached target paths.target - Paths. 
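The kubelet exit a few records up ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet been joined to a cluster: the service reads a KubeletConfiguration from that path, and the file only appears once bootstrap tooling (typically kubeadm) writes it. Purely as an illustration of what the kubelet is looking for, a sketch that writes a minimal stand-in; on a real node the bootstrap owns this file, so treat everything here as an assumption except the path, which comes from the error:

from pathlib import Path

# Smallest well-formed KubeletConfiguration document; real bootstrap
# tooling writes a much fuller version of this file.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

path = Path("/var/lib/kubelet/config.yaml")     # path from the kubelet error above
path.parent.mkdir(parents=True, exist_ok=True)  # needs root on a real host
path.write_text(MINIMAL_KUBELET_CONFIG)
print("wrote", path)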
Jan 17 12:21:59.856080 systemd[1569]: Reached target timers.target - Timers. Jan 17 12:21:59.857800 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:21:59.876568 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:21:59.876755 systemd[1569]: Reached target sockets.target - Sockets. Jan 17 12:21:59.876777 systemd[1569]: Reached target basic.target - Basic System. Jan 17 12:21:59.876848 systemd[1569]: Reached target default.target - Main User Target. Jan 17 12:21:59.876902 systemd[1569]: Startup finished in 140ms. Jan 17 12:21:59.877291 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:21:59.885470 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:21:59.962362 systemd[1]: Started sshd@1-164.92.109.43:22-139.178.68.195:48588.service - OpenSSH per-connection server daemon (139.178.68.195:48588). Jan 17 12:22:00.009269 sshd[1580]: Accepted publickey for core from 139.178.68.195 port 48588 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:00.011115 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:00.017325 systemd-logind[1442]: New session 2 of user core. Jan 17 12:22:00.021456 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:22:00.086866 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:00.099084 systemd[1]: sshd@1-164.92.109.43:22-139.178.68.195:48588.service: Deactivated successfully. Jan 17 12:22:00.101181 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:22:00.103365 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:22:00.110842 systemd[1]: Started sshd@2-164.92.109.43:22-139.178.68.195:48594.service - OpenSSH per-connection server daemon (139.178.68.195:48594). Jan 17 12:22:00.113035 systemd-logind[1442]: Removed session 2. Jan 17 12:22:00.155816 sshd[1587]: Accepted publickey for core from 139.178.68.195 port 48594 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:00.157760 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:00.165591 systemd-logind[1442]: New session 3 of user core. Jan 17 12:22:00.175510 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:22:00.236989 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:00.252676 systemd[1]: sshd@2-164.92.109.43:22-139.178.68.195:48594.service: Deactivated successfully. Jan 17 12:22:00.255280 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:22:00.256408 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:22:00.263887 systemd[1]: Started sshd@3-164.92.109.43:22-139.178.68.195:48602.service - OpenSSH per-connection server daemon (139.178.68.195:48602). Jan 17 12:22:00.267645 systemd-logind[1442]: Removed session 3. Jan 17 12:22:00.312252 sshd[1595]: Accepted publickey for core from 139.178.68.195 port 48602 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:00.314702 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:00.321467 systemd-logind[1442]: New session 4 of user core. Jan 17 12:22:00.329576 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 17 12:22:00.392795 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:00.405832 systemd[1]: sshd@3-164.92.109.43:22-139.178.68.195:48602.service: Deactivated successfully. Jan 17 12:22:00.408228 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:22:00.410411 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:22:00.415618 systemd[1]: Started sshd@4-164.92.109.43:22-139.178.68.195:48604.service - OpenSSH per-connection server daemon (139.178.68.195:48604). Jan 17 12:22:00.417859 systemd-logind[1442]: Removed session 4. Jan 17 12:22:00.465116 sshd[1602]: Accepted publickey for core from 139.178.68.195 port 48604 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:00.466891 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:00.472159 systemd-logind[1442]: New session 5 of user core. Jan 17 12:22:00.483451 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:22:00.550473 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:22:00.550802 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:00.568117 sudo[1605]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:00.572232 sshd[1602]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:00.586598 systemd[1]: sshd@4-164.92.109.43:22-139.178.68.195:48604.service: Deactivated successfully. Jan 17 12:22:00.589451 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:22:00.590368 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:22:00.597651 systemd[1]: Started sshd@5-164.92.109.43:22-139.178.68.195:48620.service - OpenSSH per-connection server daemon (139.178.68.195:48620). Jan 17 12:22:00.599174 systemd-logind[1442]: Removed session 5. Jan 17 12:22:00.644850 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 48620 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:00.646951 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:00.653464 systemd-logind[1442]: New session 6 of user core. Jan 17 12:22:00.659504 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:22:00.722662 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:22:00.723041 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:00.727309 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:00.734357 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:22:00.734678 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:00.751869 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:22:00.756598 auditctl[1617]: No rules Jan 17 12:22:00.757165 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:22:00.757504 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:22:00.765897 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:22:00.797538 augenrules[1635]: No rules Jan 17 12:22:00.799182 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
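The sudo/audit sequence above is an install script clearing the shipped audit policy: it removes the two rule files from /etc/audit/rules.d/ and restarts audit-rules.service, after which both auditctl and augenrules report "No rules". The end state can be reproduced with auditctl's standard flags (-D flushes loaded rules, -l lists them); a hedged sketch:

import subprocess

subprocess.run(["auditctl", "-D"], check=True)  # flush all loaded kernel audit rules
subprocess.run(["auditctl", "-l"], check=True)  # with nothing loaded, prints "No rules"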
Jan 17 12:22:00.800741 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:00.805466 sshd[1610]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:00.818505 systemd[1]: sshd@5-164.92.109.43:22-139.178.68.195:48620.service: Deactivated successfully. Jan 17 12:22:00.820539 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:22:00.822663 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:22:00.828794 systemd[1]: Started sshd@6-164.92.109.43:22-139.178.68.195:48632.service - OpenSSH per-connection server daemon (139.178.68.195:48632). Jan 17 12:22:00.831709 systemd-logind[1442]: Removed session 6. Jan 17 12:22:00.876547 sshd[1643]: Accepted publickey for core from 139.178.68.195 port 48632 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:00.878718 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:00.884588 systemd-logind[1442]: New session 7 of user core. Jan 17 12:22:00.900560 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:22:00.963695 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:22:00.964058 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:01.465770 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:22:01.465984 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:22:01.947800 dockerd[1662]: time="2025-01-17T12:22:01.947209962Z" level=info msg="Starting up" Jan 17 12:22:02.057980 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1952590060-merged.mount: Deactivated successfully. Jan 17 12:22:02.086679 dockerd[1662]: time="2025-01-17T12:22:02.086356169Z" level=info msg="Loading containers: start." Jan 17 12:22:02.202316 kernel: Initializing XFRM netlink socket Jan 17 12:22:02.295369 systemd-networkd[1358]: docker0: Link UP Jan 17 12:22:02.313315 dockerd[1662]: time="2025-01-17T12:22:02.313259386Z" level=info msg="Loading containers: done." Jan 17 12:22:02.332795 dockerd[1662]: time="2025-01-17T12:22:02.332711659Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:22:02.332985 dockerd[1662]: time="2025-01-17T12:22:02.332878299Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:22:02.333024 dockerd[1662]: time="2025-01-17T12:22:02.333015357Z" level=info msg="Daemon has completed initialization" Jan 17 12:22:02.373752 dockerd[1662]: time="2025-01-17T12:22:02.373374027Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:22:02.373972 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:22:03.316559 containerd[1460]: time="2025-01-17T12:22:03.316503617Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:22:03.877092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831516589.mount: Deactivated successfully. 
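Once dockerd logs "API listen on /run/docker.sock", the daemon can be health-checked over that UNIX socket with no client library at all, by speaking raw HTTP to the Engine API's /_ping endpoint. A minimal sketch (socket path from the log; a healthy daemon answers 200 OK):

import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")   # path from the dockerd record above
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = s.recv(4096).decode("ascii", "replace")

print(reply.splitlines()[0])        # e.g. "HTTP/1.1 200 OK"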
Jan 17 12:22:05.248216 containerd[1460]: time="2025-01-17T12:22:05.247670008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:05.249034 containerd[1460]: time="2025-01-17T12:22:05.249000291Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 17 12:22:05.249645 containerd[1460]: time="2025-01-17T12:22:05.249592659Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:05.253373 containerd[1460]: time="2025-01-17T12:22:05.253239548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:05.254808 containerd[1460]: time="2025-01-17T12:22:05.254339007Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 1.937794728s" Jan 17 12:22:05.254808 containerd[1460]: time="2025-01-17T12:22:05.254375293Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:22:05.284889 containerd[1460]: time="2025-01-17T12:22:05.284850226Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:22:05.492763 systemd-resolved[1320]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 17 12:22:06.725039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:22:06.735302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:06.857522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:22:06.858875 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:22:06.930306 containerd[1460]: time="2025-01-17T12:22:06.929839114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:06.932213 containerd[1460]: time="2025-01-17T12:22:06.931557151Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 17 12:22:06.932213 containerd[1460]: time="2025-01-17T12:22:06.932037904Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:06.932882 kubelet[1884]: E0117 12:22:06.932834 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:22:06.937322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:22:06.937474 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:22:06.941572 containerd[1460]: time="2025-01-17T12:22:06.941521782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:06.943480 containerd[1460]: time="2025-01-17T12:22:06.942565511Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 1.657673627s" Jan 17 12:22:06.943480 containerd[1460]: time="2025-01-17T12:22:06.943482764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:22:06.970598 containerd[1460]: time="2025-01-17T12:22:06.970561703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:22:07.981258 containerd[1460]: time="2025-01-17T12:22:07.980219453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:07.982229 containerd[1460]: time="2025-01-17T12:22:07.982183273Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 17 12:22:07.983482 containerd[1460]: time="2025-01-17T12:22:07.983449362Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:07.986040 containerd[1460]: time="2025-01-17T12:22:07.986005581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 17 12:22:07.987236 containerd[1460]: time="2025-01-17T12:22:07.987205089Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.016405253s" Jan 17 12:22:07.987342 containerd[1460]: time="2025-01-17T12:22:07.987327353Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:22:08.026972 containerd[1460]: time="2025-01-17T12:22:08.026924337Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:22:09.009417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906286046.mount: Deactivated successfully. Jan 17 12:22:09.518094 containerd[1460]: time="2025-01-17T12:22:09.517997391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:09.519402 containerd[1460]: time="2025-01-17T12:22:09.519034274Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:22:09.520096 containerd[1460]: time="2025-01-17T12:22:09.520042069Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:09.523468 containerd[1460]: time="2025-01-17T12:22:09.523402807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:09.524773 containerd[1460]: time="2025-01-17T12:22:09.524713219Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.497718415s" Jan 17 12:22:09.525211 containerd[1460]: time="2025-01-17T12:22:09.524985784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:22:09.561798 containerd[1460]: time="2025-01-17T12:22:09.561737472Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:22:09.563816 systemd-resolved[1320]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 17 12:22:10.068361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462717229.mount: Deactivated successfully. 
Jan 17 12:22:11.145539 containerd[1460]: time="2025-01-17T12:22:11.145472640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:11.146867 containerd[1460]: time="2025-01-17T12:22:11.146259818Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:22:11.148257 containerd[1460]: time="2025-01-17T12:22:11.148095416Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:11.151564 containerd[1460]: time="2025-01-17T12:22:11.151496463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:11.153212 containerd[1460]: time="2025-01-17T12:22:11.153091166Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.591297143s" Jan 17 12:22:11.153212 containerd[1460]: time="2025-01-17T12:22:11.153153245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:22:11.182808 containerd[1460]: time="2025-01-17T12:22:11.182738135Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:22:11.638371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341616594.mount: Deactivated successfully. 
Jan 17 12:22:11.643031 containerd[1460]: time="2025-01-17T12:22:11.642983543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:11.644027 containerd[1460]: time="2025-01-17T12:22:11.643867811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:22:11.644802 containerd[1460]: time="2025-01-17T12:22:11.644531924Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:11.646644 containerd[1460]: time="2025-01-17T12:22:11.646577645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:11.647425 containerd[1460]: time="2025-01-17T12:22:11.647389571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 464.60308ms" Jan 17 12:22:11.647758 containerd[1460]: time="2025-01-17T12:22:11.647430706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:22:11.673733 containerd[1460]: time="2025-01-17T12:22:11.673694537Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:22:12.150134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649791406.mount: Deactivated successfully. Jan 17 12:22:13.828645 containerd[1460]: time="2025-01-17T12:22:13.828589017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:13.830427 containerd[1460]: time="2025-01-17T12:22:13.830369657Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 17 12:22:13.830708 containerd[1460]: time="2025-01-17T12:22:13.830661742Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:13.834977 containerd[1460]: time="2025-01-17T12:22:13.834883852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:13.836591 containerd[1460]: time="2025-01-17T12:22:13.836352858Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.16245136s" Jan 17 12:22:13.836591 containerd[1460]: time="2025-01-17T12:22:13.836421066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:22:16.701753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
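The image pulls logged above each report a size and a duration, so the effective throughput (network transfer plus unpack) falls out directly; it sits in the 10-25 MiB/s range, with the small pause image dominated by per-pull overhead rather than bandwidth. The (bytes, seconds) pairs below are copied from the "Pulled image" records:

# Sizes and durations copied verbatim from the containerd records above.
PULLS = {
    "kube-apiserver:v1.29.13":          (35_137_530, 1.937794728),
    "kube-controller-manager:v1.29.13": (33_663_223, 1.657673627),
    "kube-scheduler:v1.29.13":          (18_779_441, 1.016405253),
    "kube-proxy:v1.29.13":              (28_619_960, 1.497718415),
    "coredns:v1.11.1":                  (18_182_961, 1.591297143),
    "etcd:3.5.10-0":                    (56_649_232, 2.16245136),
}

for image, (size, secs) in PULLS.items():
    print(f"{image:36s} {size / secs / 2**20:5.1f} MiB/s")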
Jan 17 12:22:16.715636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:16.748143 systemd[1]: Reloading requested from client PID 2088 ('systemctl') (unit session-7.scope)... Jan 17 12:22:16.748161 systemd[1]: Reloading... Jan 17 12:22:16.893195 zram_generator::config[2130]: No configuration found. Jan 17 12:22:17.015132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:22:17.101292 systemd[1]: Reloading finished in 352 ms. Jan 17 12:22:17.151158 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:22:17.151322 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:22:17.151898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:17.157679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:17.268031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:17.278820 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:22:17.349422 kubelet[2181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:17.349422 kubelet[2181]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:22:17.349422 kubelet[2181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:17.350567 kubelet[2181]: I0117 12:22:17.350483 2181 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:22:17.584906 kubelet[2181]: I0117 12:22:17.584783 2181 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:22:17.584906 kubelet[2181]: I0117 12:22:17.584823 2181 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:22:17.585598 kubelet[2181]: I0117 12:22:17.585569 2181 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:22:17.610032 kubelet[2181]: I0117 12:22:17.609519 2181 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:22:17.610766 kubelet[2181]: E0117 12:22:17.610710 2181 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.109.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:17.628693 kubelet[2181]: I0117 12:22:17.628584 2181 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:22:17.628897 kubelet[2181]: I0117 12:22:17.628879 2181 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:22:17.630020 kubelet[2181]: I0117 12:22:17.629957 2181 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:22:17.630020 kubelet[2181]: I0117 12:22:17.630019 2181 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:22:17.630020 kubelet[2181]: I0117 12:22:17.630031 2181 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:22:17.630262 kubelet[2181]: I0117 12:22:17.630193 2181 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:17.630323 kubelet[2181]: I0117 12:22:17.630310 2181 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:22:17.630356 kubelet[2181]: I0117 12:22:17.630331 2181 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:22:17.630394 kubelet[2181]: I0117 12:22:17.630367 2181 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:22:17.630394 kubelet[2181]: I0117 12:22:17.630386 2181 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:22:17.632806 kubelet[2181]: I0117 12:22:17.632779 2181 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:22:17.640292 kubelet[2181]: I0117 12:22:17.640243 2181 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:22:17.640540 kubelet[2181]: W0117 12:22:17.640525 2181 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
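Buried in the container-manager dump above are the kubelet's default hard-eviction thresholds; decoded from the HardEvictionThresholds JSON (Quantity "100Mi", Percentages 0.1, 0.05, 0.15), they read:

# Decoded from the HardEvictionThresholds list in the dump above.
HARD_EVICTION = {
    "memory.available":  "< 100Mi",
    "nodefs.available":  "< 10%",
    "nodefs.inodesFree": "< 5%",
    "imagefs.available": "< 15%",
}

for signal, threshold in HARD_EVICTION.items():
    print(f"evict pods when {signal} {threshold}")

When any signal crosses its threshold, the kubelet begins evicting pods immediately: the grace period in the dump is 0.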
Jan 17 12:22:17.641312 kubelet[2181]: W0117 12:22:17.641261 2181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://164.92.109.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:17.641312 kubelet[2181]: E0117 12:22:17.641317 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.109.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:17.641506 kubelet[2181]: W0117 12:22:17.641423 2181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://164.92.109.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-6-c2def92c28&limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:17.641506 kubelet[2181]: E0117 12:22:17.641449 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.109.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-6-c2def92c28&limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:17.642221 kubelet[2181]: I0117 12:22:17.642202 2181 server.go:1256] "Started kubelet" Jan 17 12:22:17.642369 kubelet[2181]: I0117 12:22:17.642357 2181 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:22:17.644119 kubelet[2181]: I0117 12:22:17.644055 2181 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:22:17.645304 kubelet[2181]: I0117 12:22:17.645271 2181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:22:17.646297 kubelet[2181]: I0117 12:22:17.645499 2181 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:22:17.648922 kubelet[2181]: I0117 12:22:17.648738 2181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:22:17.654532 kubelet[2181]: E0117 12:22:17.654488 2181 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.109.43:6443/api/v1/namespaces/default/events\": dial tcp 164.92.109.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-6-c2def92c28.181b7a498f9ddcb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-6-c2def92c28,UID:ci-4081.3.0-6-c2def92c28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-6-c2def92c28,},FirstTimestamp:2025-01-17 12:22:17.642146996 +0000 UTC m=+0.355035118,LastTimestamp:2025-01-17 12:22:17.642146996 +0000 UTC m=+0.355035118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-6-c2def92c28,}" Jan 17 12:22:17.658489 kubelet[2181]: E0117 12:22:17.657968 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:17.658489 kubelet[2181]: I0117 12:22:17.658010 2181 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:22:17.658489 kubelet[2181]: I0117 12:22:17.658089 2181 desired_state_of_world_populator.go:151] "Desired state 
populator starts to run" Jan 17 12:22:17.658489 kubelet[2181]: I0117 12:22:17.658159 2181 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:22:17.659023 kubelet[2181]: E0117 12:22:17.658995 2181 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:22:17.662581 kubelet[2181]: E0117 12:22:17.661450 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.109.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-6-c2def92c28?timeout=10s\": dial tcp 164.92.109.43:6443: connect: connection refused" interval="200ms" Jan 17 12:22:17.662581 kubelet[2181]: W0117 12:22:17.661558 2181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://164.92.109.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:17.662581 kubelet[2181]: E0117 12:22:17.661599 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.109.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:17.662581 kubelet[2181]: I0117 12:22:17.661804 2181 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:22:17.662581 kubelet[2181]: I0117 12:22:17.661900 2181 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:22:17.665815 kubelet[2181]: I0117 12:22:17.665487 2181 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:22:17.672846 kubelet[2181]: I0117 12:22:17.672808 2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:22:17.676803 kubelet[2181]: I0117 12:22:17.676452 2181 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:22:17.676803 kubelet[2181]: I0117 12:22:17.676489 2181 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:22:17.676803 kubelet[2181]: I0117 12:22:17.676511 2181 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:22:17.676803 kubelet[2181]: E0117 12:22:17.676577 2181 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:22:17.687363 kubelet[2181]: W0117 12:22:17.687215 2181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://164.92.109.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:17.687632 kubelet[2181]: E0117 12:22:17.687603 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.109.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:17.699692 kubelet[2181]: I0117 12:22:17.699658 2181 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:22:17.699692 kubelet[2181]: I0117 12:22:17.699680 2181 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:22:17.699692 kubelet[2181]: I0117 12:22:17.699701 2181 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:17.702260 kubelet[2181]: I0117 12:22:17.702200 2181 policy_none.go:49] "None policy: Start" Jan 17 12:22:17.703250 kubelet[2181]: I0117 12:22:17.703229 2181 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:22:17.703350 kubelet[2181]: I0117 12:22:17.703302 2181 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:22:17.711537 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:22:17.721281 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:22:17.725156 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 12:22:17.733532 kubelet[2181]: I0117 12:22:17.733499 2181 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:22:17.734191 kubelet[2181]: I0117 12:22:17.734151 2181 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:22:17.736148 kubelet[2181]: E0117 12:22:17.736130 2181 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:17.759450 kubelet[2181]: I0117 12:22:17.759380 2181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.759936 kubelet[2181]: E0117 12:22:17.759905 2181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.109.43:6443/api/v1/nodes\": dial tcp 164.92.109.43:6443: connect: connection refused" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.777640 kubelet[2181]: I0117 12:22:17.777587 2181 topology_manager.go:215] "Topology Admit Handler" podUID="d009c2ee6963a764dd8d634e11717e07" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.779137 kubelet[2181]: I0117 12:22:17.778625 2181 topology_manager.go:215] "Topology Admit Handler" podUID="04c93b2335ce9d2b0aad7fb0e5f6f982" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.779999 kubelet[2181]: I0117 12:22:17.779969 2181 topology_manager.go:215] "Topology Admit Handler" podUID="28be93d205fa1ee2f8e04912acf1536f" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.787222 systemd[1]: Created slice kubepods-burstable-podd009c2ee6963a764dd8d634e11717e07.slice - libcontainer container kubepods-burstable-podd009c2ee6963a764dd8d634e11717e07.slice. Jan 17 12:22:17.801596 systemd[1]: Created slice kubepods-burstable-pod04c93b2335ce9d2b0aad7fb0e5f6f982.slice - libcontainer container kubepods-burstable-pod04c93b2335ce9d2b0aad7fb0e5f6f982.slice. Jan 17 12:22:17.813846 systemd[1]: Created slice kubepods-burstable-pod28be93d205fa1ee2f8e04912acf1536f.slice - libcontainer container kubepods-burstable-pod28be93d205fa1ee2f8e04912acf1536f.slice. 
Jan 17 12:22:17.862379 kubelet[2181]: E0117 12:22:17.862256 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.109.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-6-c2def92c28?timeout=10s\": dial tcp 164.92.109.43:6443: connect: connection refused" interval="400ms" Jan 17 12:22:17.959203 kubelet[2181]: I0117 12:22:17.958764 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d009c2ee6963a764dd8d634e11717e07-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-6-c2def92c28\" (UID: \"d009c2ee6963a764dd8d634e11717e07\") " pod="kube-system/kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.959203 kubelet[2181]: I0117 12:22:17.958818 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.959203 kubelet[2181]: I0117 12:22:17.958843 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.959203 kubelet[2181]: I0117 12:22:17.958947 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28be93d205fa1ee2f8e04912acf1536f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-6-c2def92c28\" (UID: \"28be93d205fa1ee2f8e04912acf1536f\") " pod="kube-system/kube-scheduler-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.959203 kubelet[2181]: I0117 12:22:17.958966 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d009c2ee6963a764dd8d634e11717e07-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-6-c2def92c28\" (UID: \"d009c2ee6963a764dd8d634e11717e07\") " pod="kube-system/kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.959495 kubelet[2181]: I0117 12:22:17.958986 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.959495 kubelet[2181]: I0117 12:22:17.959004 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.959495 kubelet[2181]: I0117 12:22:17.959024 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.959495 kubelet[2181]: I0117 12:22:17.959120 2181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d009c2ee6963a764dd8d634e11717e07-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-6-c2def92c28\" (UID: \"d009c2ee6963a764dd8d634e11717e07\") " pod="kube-system/kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.961285 kubelet[2181]: I0117 12:22:17.961242 2181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:17.961599 kubelet[2181]: E0117 12:22:17.961580 2181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.109.43:6443/api/v1/nodes\": dial tcp 164.92.109.43:6443: connect: connection refused" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:18.101339 kubelet[2181]: E0117 12:22:18.100741 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:18.101995 containerd[1460]: time="2025-01-17T12:22:18.101944548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-6-c2def92c28,Uid:d009c2ee6963a764dd8d634e11717e07,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:18.105677 kubelet[2181]: E0117 12:22:18.105637 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:18.112838 containerd[1460]: time="2025-01-17T12:22:18.112683590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-6-c2def92c28,Uid:04c93b2335ce9d2b0aad7fb0e5f6f982,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:18.117285 kubelet[2181]: E0117 12:22:18.117108 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:18.118156 containerd[1460]: time="2025-01-17T12:22:18.117721821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-6-c2def92c28,Uid:28be93d205fa1ee2f8e04912acf1536f,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:18.263316 kubelet[2181]: E0117 12:22:18.263280 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.109.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-6-c2def92c28?timeout=10s\": dial tcp 164.92.109.43:6443: connect: connection refused" interval="800ms" Jan 17 12:22:18.363505 kubelet[2181]: I0117 12:22:18.362884 2181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:18.363872 kubelet[2181]: E0117 12:22:18.363601 2181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.109.43:6443/api/v1/nodes\": dial tcp 164.92.109.43:6443: connect: connection refused" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:18.501625 kubelet[2181]: W0117 12:22:18.501575 2181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get 
"https://164.92.109.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:18.501625 kubelet[2181]: E0117 12:22:18.501621 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.109.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:18.534344 kubelet[2181]: W0117 12:22:18.534295 2181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://164.92.109.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:18.534344 kubelet[2181]: E0117 12:22:18.534342 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.109.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:18.556906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589871980.mount: Deactivated successfully. Jan 17 12:22:18.561272 containerd[1460]: time="2025-01-17T12:22:18.561197120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:18.561970 containerd[1460]: time="2025-01-17T12:22:18.561902445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:22:18.562887 containerd[1460]: time="2025-01-17T12:22:18.562852589Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:18.564397 containerd[1460]: time="2025-01-17T12:22:18.564004687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:18.564397 containerd[1460]: time="2025-01-17T12:22:18.564228493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:22:18.564960 containerd[1460]: time="2025-01-17T12:22:18.564932917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:18.565428 containerd[1460]: time="2025-01-17T12:22:18.565394147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:22:18.568517 containerd[1460]: time="2025-01-17T12:22:18.568431265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:18.570614 containerd[1460]: time="2025-01-17T12:22:18.570105727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.261634ms" Jan 17 12:22:18.571712 containerd[1460]: time="2025-01-17T12:22:18.571575217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 458.791932ms" Jan 17 12:22:18.573697 containerd[1460]: time="2025-01-17T12:22:18.573655645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 471.073026ms" Jan 17 12:22:18.729996 containerd[1460]: time="2025-01-17T12:22:18.729797956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:18.729996 containerd[1460]: time="2025-01-17T12:22:18.729938796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:18.730634 containerd[1460]: time="2025-01-17T12:22:18.729970800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:18.732256 containerd[1460]: time="2025-01-17T12:22:18.732162034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:18.740978 containerd[1460]: time="2025-01-17T12:22:18.739108878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:18.740978 containerd[1460]: time="2025-01-17T12:22:18.739158265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:18.740978 containerd[1460]: time="2025-01-17T12:22:18.739185738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:18.740978 containerd[1460]: time="2025-01-17T12:22:18.739285252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:18.751643 containerd[1460]: time="2025-01-17T12:22:18.750482424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:18.751643 containerd[1460]: time="2025-01-17T12:22:18.751643432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:18.752510 containerd[1460]: time="2025-01-17T12:22:18.751696148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:18.752510 containerd[1460]: time="2025-01-17T12:22:18.751992228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:18.778505 systemd[1]: Started cri-containerd-e52461c5c4fe1371b9f3bdb3251d975badf4cfa348f2d78d26596d8f8723290f.scope - libcontainer container e52461c5c4fe1371b9f3bdb3251d975badf4cfa348f2d78d26596d8f8723290f. Jan 17 12:22:18.780733 kubelet[2181]: W0117 12:22:18.780252 2181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://164.92.109.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:18.780733 kubelet[2181]: E0117 12:22:18.780294 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.109.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:18.781036 systemd[1]: Started cri-containerd-f451abdbf7fda33b878408b5cf0a36004d123f7e008087a9f8dbd56f0837b749.scope - libcontainer container f451abdbf7fda33b878408b5cf0a36004d123f7e008087a9f8dbd56f0837b749. Jan 17 12:22:18.794418 systemd[1]: Started cri-containerd-79f53f53f7241b991c7f0dfbe9f1e33bb26b9b07dc435c1e13f80c200e180073.scope - libcontainer container 79f53f53f7241b991c7f0dfbe9f1e33bb26b9b07dc435c1e13f80c200e180073. Jan 17 12:22:18.861197 containerd[1460]: time="2025-01-17T12:22:18.860691758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-6-c2def92c28,Uid:04c93b2335ce9d2b0aad7fb0e5f6f982,Namespace:kube-system,Attempt:0,} returns sandbox id \"e52461c5c4fe1371b9f3bdb3251d975badf4cfa348f2d78d26596d8f8723290f\"" Jan 17 12:22:18.865641 kubelet[2181]: E0117 12:22:18.865431 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:18.872848 containerd[1460]: time="2025-01-17T12:22:18.872709855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-6-c2def92c28,Uid:d009c2ee6963a764dd8d634e11717e07,Namespace:kube-system,Attempt:0,} returns sandbox id \"79f53f53f7241b991c7f0dfbe9f1e33bb26b9b07dc435c1e13f80c200e180073\"" Jan 17 12:22:18.876154 kubelet[2181]: E0117 12:22:18.875535 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:18.876295 containerd[1460]: time="2025-01-17T12:22:18.875827663Z" level=info msg="CreateContainer within sandbox \"e52461c5c4fe1371b9f3bdb3251d975badf4cfa348f2d78d26596d8f8723290f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:22:18.881206 containerd[1460]: time="2025-01-17T12:22:18.881104495Z" level=info msg="CreateContainer within sandbox \"79f53f53f7241b991c7f0dfbe9f1e33bb26b9b07dc435c1e13f80c200e180073\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:22:18.905326 containerd[1460]: time="2025-01-17T12:22:18.905245697Z" level=info msg="CreateContainer within sandbox \"e52461c5c4fe1371b9f3bdb3251d975badf4cfa348f2d78d26596d8f8723290f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"56a4a92f91c4a075f74267bef0f4468b23d613d7f704ad0047e542a5517fae81\"" Jan 17 12:22:18.906226 containerd[1460]: time="2025-01-17T12:22:18.906105758Z" level=info msg="CreateContainer within sandbox 
\"79f53f53f7241b991c7f0dfbe9f1e33bb26b9b07dc435c1e13f80c200e180073\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bfa3a9840ce7ecae60ab4ffbd5983c0e8ea5bf49b66ce14f99b76737c033b90e\"" Jan 17 12:22:18.906595 containerd[1460]: time="2025-01-17T12:22:18.906467488Z" level=info msg="StartContainer for \"56a4a92f91c4a075f74267bef0f4468b23d613d7f704ad0047e542a5517fae81\"" Jan 17 12:22:18.911348 containerd[1460]: time="2025-01-17T12:22:18.911130023Z" level=info msg="StartContainer for \"bfa3a9840ce7ecae60ab4ffbd5983c0e8ea5bf49b66ce14f99b76737c033b90e\"" Jan 17 12:22:18.918222 containerd[1460]: time="2025-01-17T12:22:18.917980892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-6-c2def92c28,Uid:28be93d205fa1ee2f8e04912acf1536f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f451abdbf7fda33b878408b5cf0a36004d123f7e008087a9f8dbd56f0837b749\"" Jan 17 12:22:18.919381 kubelet[2181]: E0117 12:22:18.919351 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:18.925959 containerd[1460]: time="2025-01-17T12:22:18.925724884Z" level=info msg="CreateContainer within sandbox \"f451abdbf7fda33b878408b5cf0a36004d123f7e008087a9f8dbd56f0837b749\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:22:18.949069 containerd[1460]: time="2025-01-17T12:22:18.949009535Z" level=info msg="CreateContainer within sandbox \"f451abdbf7fda33b878408b5cf0a36004d123f7e008087a9f8dbd56f0837b749\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8454362bf684f378c40b6bf1e5957c30dfa37f59f911b0c8bcbf86765b819430\"" Jan 17 12:22:18.953191 containerd[1460]: time="2025-01-17T12:22:18.952403988Z" level=info msg="StartContainer for \"8454362bf684f378c40b6bf1e5957c30dfa37f59f911b0c8bcbf86765b819430\"" Jan 17 12:22:18.959934 systemd[1]: Started cri-containerd-bfa3a9840ce7ecae60ab4ffbd5983c0e8ea5bf49b66ce14f99b76737c033b90e.scope - libcontainer container bfa3a9840ce7ecae60ab4ffbd5983c0e8ea5bf49b66ce14f99b76737c033b90e. Jan 17 12:22:18.981431 systemd[1]: Started cri-containerd-56a4a92f91c4a075f74267bef0f4468b23d613d7f704ad0047e542a5517fae81.scope - libcontainer container 56a4a92f91c4a075f74267bef0f4468b23d613d7f704ad0047e542a5517fae81. Jan 17 12:22:19.041257 systemd[1]: Started cri-containerd-8454362bf684f378c40b6bf1e5957c30dfa37f59f911b0c8bcbf86765b819430.scope - libcontainer container 8454362bf684f378c40b6bf1e5957c30dfa37f59f911b0c8bcbf86765b819430. 
Jan 17 12:22:19.065946 kubelet[2181]: E0117 12:22:19.065899 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.109.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-6-c2def92c28?timeout=10s\": dial tcp 164.92.109.43:6443: connect: connection refused" interval="1.6s" Jan 17 12:22:19.069421 containerd[1460]: time="2025-01-17T12:22:19.068586400Z" level=info msg="StartContainer for \"bfa3a9840ce7ecae60ab4ffbd5983c0e8ea5bf49b66ce14f99b76737c033b90e\" returns successfully" Jan 17 12:22:19.092265 containerd[1460]: time="2025-01-17T12:22:19.092076760Z" level=info msg="StartContainer for \"56a4a92f91c4a075f74267bef0f4468b23d613d7f704ad0047e542a5517fae81\" returns successfully" Jan 17 12:22:19.133206 containerd[1460]: time="2025-01-17T12:22:19.132752459Z" level=info msg="StartContainer for \"8454362bf684f378c40b6bf1e5957c30dfa37f59f911b0c8bcbf86765b819430\" returns successfully" Jan 17 12:22:19.166347 kubelet[2181]: I0117 12:22:19.166302 2181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:19.167004 kubelet[2181]: E0117 12:22:19.166911 2181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.109.43:6443/api/v1/nodes\": dial tcp 164.92.109.43:6443: connect: connection refused" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:19.198069 kubelet[2181]: W0117 12:22:19.197774 2181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://164.92.109.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-6-c2def92c28&limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:19.198069 kubelet[2181]: E0117 12:22:19.197845 2181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.109.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-6-c2def92c28&limit=500&resourceVersion=0": dial tcp 164.92.109.43:6443: connect: connection refused Jan 17 12:22:19.699377 kubelet[2181]: E0117 12:22:19.698605 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:19.704204 kubelet[2181]: E0117 12:22:19.701354 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:19.704559 kubelet[2181]: E0117 12:22:19.704532 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:20.704284 kubelet[2181]: E0117 12:22:20.704246 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:20.769297 kubelet[2181]: I0117 12:22:20.769254 2181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:20.819922 kubelet[2181]: E0117 12:22:20.819886 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:21.199576 kubelet[2181]: I0117 
12:22:21.199411 2181 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:21.223890 kubelet[2181]: E0117 12:22:21.223852 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:21.261766 kubelet[2181]: E0117 12:22:21.261722 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 17 12:22:21.324261 kubelet[2181]: E0117 12:22:21.324212 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:21.425411 kubelet[2181]: E0117 12:22:21.425338 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:21.526385 kubelet[2181]: E0117 12:22:21.526240 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:21.627271 kubelet[2181]: E0117 12:22:21.627208 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:21.727835 kubelet[2181]: E0117 12:22:21.727777 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:21.828760 kubelet[2181]: E0117 12:22:21.828610 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:21.929364 kubelet[2181]: E0117 12:22:21.929301 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:22.030056 kubelet[2181]: E0117 12:22:22.030001 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:22.130620 kubelet[2181]: E0117 12:22:22.130464 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-6-c2def92c28\" not found" Jan 17 12:22:22.635607 kubelet[2181]: I0117 12:22:22.635546 2181 apiserver.go:52] "Watching apiserver" Jan 17 12:22:22.658300 kubelet[2181]: I0117 12:22:22.658246 2181 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:22:25.850097 kubelet[2181]: W0117 12:22:25.849415 2181 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:25.850097 kubelet[2181]: E0117 12:22:25.850127 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:25.875026 kubelet[2181]: W0117 12:22:25.873751 2181 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:25.876610 kubelet[2181]: E0117 12:22:25.876573 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:26.520201 systemd[1]: Reloading requested from client PID 2456 ('systemctl') (unit session-7.scope)... 
Jan 17 12:22:26.520224 systemd[1]: Reloading... Jan 17 12:22:26.643207 zram_generator::config[2498]: No configuration found. Jan 17 12:22:26.715434 kubelet[2181]: E0117 12:22:26.715343 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:26.716623 kubelet[2181]: E0117 12:22:26.715951 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:26.835822 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:22:26.969031 systemd[1]: Reloading finished in 448 ms. Jan 17 12:22:27.018075 kubelet[2181]: I0117 12:22:27.017742 2181 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:22:27.017872 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:27.028049 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:22:27.028379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:27.037643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:27.209544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:27.214196 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:22:27.303895 kubelet[2547]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:27.305564 kubelet[2547]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:22:27.305564 kubelet[2547]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:27.305564 kubelet[2547]: I0117 12:22:27.304073 2547 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:22:27.312826 kubelet[2547]: I0117 12:22:27.312091 2547 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:22:27.312826 kubelet[2547]: I0117 12:22:27.312129 2547 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:22:27.312826 kubelet[2547]: I0117 12:22:27.312386 2547 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:22:27.314493 kubelet[2547]: I0117 12:22:27.314463 2547 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
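[Editorial note] The "Failed to ensure lease exists, will retry" lines earlier show the retry interval doubling (200ms, 400ms, 800ms, 1.6s, 3.2s) while the API server stays unreachable. A toy Go sketch of that exponential-backoff pattern — illustrative only, not kubelet's actual implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff doubles the wait after each failure, mirroring the
// interval progression in the lease-controller log lines above.
func retryWithBackoff(attempt func() error, initial time.Duration, maxAttempts int) error {
	interval := initial
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = attempt(); err == nil {
			return nil
		}
		fmt.Printf("failed to ensure lease exists, will retry: interval=%s\n", interval)
		time.Sleep(interval)
		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(func() error {
		calls++
		if calls < 5 { // succeed on the fifth try, once "the apiserver is up"
			return errors.New("connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 6)
}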
Jan 17 12:22:27.316742 kubelet[2547]: I0117 12:22:27.316697 2547 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:22:27.329209 kubelet[2547]: I0117 12:22:27.329154 2547 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:22:27.329787 kubelet[2547]: I0117 12:22:27.329770 2547 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:22:27.330901 kubelet[2547]: I0117 12:22:27.330844 2547 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:22:27.331255 kubelet[2547]: I0117 12:22:27.331237 2547 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:22:27.331395 kubelet[2547]: I0117 12:22:27.331381 2547 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:22:27.331562 kubelet[2547]: I0117 12:22:27.331550 2547 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:27.332611 kubelet[2547]: I0117 12:22:27.332583 2547 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:22:27.333145 kubelet[2547]: I0117 12:22:27.333067 2547 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:22:27.333218 kubelet[2547]: I0117 12:22:27.333163 2547 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:22:27.333218 kubelet[2547]: I0117 12:22:27.333189 2547 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:22:27.336348 kubelet[2547]: I0117 12:22:27.336314 2547 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:22:27.337014 kubelet[2547]: I0117 12:22:27.336995 2547 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:22:27.337490 kubelet[2547]: I0117 12:22:27.337432 2547 server.go:1256] "Started kubelet" Jan 17 12:22:27.347292 kubelet[2547]: I0117 12:22:27.346879 2547 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:22:27.357100 kubelet[2547]: I0117 12:22:27.357056 2547 
server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:22:27.359558 kubelet[2547]: I0117 12:22:27.358195 2547 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:22:27.364240 kubelet[2547]: I0117 12:22:27.362589 2547 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:22:27.364240 kubelet[2547]: I0117 12:22:27.362826 2547 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:22:27.368222 kubelet[2547]: I0117 12:22:27.368125 2547 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:22:27.374969 kubelet[2547]: I0117 12:22:27.372647 2547 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:22:27.374969 kubelet[2547]: I0117 12:22:27.372905 2547 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:22:27.385210 kubelet[2547]: I0117 12:22:27.382923 2547 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:22:27.385517 kubelet[2547]: I0117 12:22:27.385486 2547 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:22:27.391840 kubelet[2547]: I0117 12:22:27.391805 2547 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:22:27.397948 kubelet[2547]: I0117 12:22:27.397915 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:22:27.399505 kubelet[2547]: I0117 12:22:27.399475 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:22:27.399648 kubelet[2547]: I0117 12:22:27.399640 2547 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:22:27.399739 kubelet[2547]: I0117 12:22:27.399731 2547 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:22:27.399836 kubelet[2547]: E0117 12:22:27.399826 2547 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:22:27.464536 kubelet[2547]: I0117 12:22:27.464428 2547 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:22:27.464874 kubelet[2547]: I0117 12:22:27.464836 2547 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:22:27.465030 kubelet[2547]: I0117 12:22:27.465016 2547 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:27.465400 kubelet[2547]: I0117 12:22:27.465384 2547 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:22:27.465510 kubelet[2547]: I0117 12:22:27.465501 2547 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:22:27.465598 kubelet[2547]: I0117 12:22:27.465588 2547 policy_none.go:49] "None policy: Start" Jan 17 12:22:27.467211 kubelet[2547]: I0117 12:22:27.467159 2547 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:22:27.467357 kubelet[2547]: I0117 12:22:27.467346 2547 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:22:27.467728 kubelet[2547]: I0117 12:22:27.467701 2547 state_mem.go:75] "Updated machine memory state" Jan 17 12:22:27.473646 kubelet[2547]: I0117 12:22:27.473572 2547 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.483950 kubelet[2547]: I0117 12:22:27.483787 2547 manager.go:479] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:22:27.485740 kubelet[2547]: I0117 12:22:27.485114 2547 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:22:27.504551 kubelet[2547]: I0117 12:22:27.500275 2547 topology_manager.go:215] "Topology Admit Handler" podUID="28be93d205fa1ee2f8e04912acf1536f" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.504551 kubelet[2547]: I0117 12:22:27.500382 2547 topology_manager.go:215] "Topology Admit Handler" podUID="d009c2ee6963a764dd8d634e11717e07" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.504551 kubelet[2547]: I0117 12:22:27.500444 2547 topology_manager.go:215] "Topology Admit Handler" podUID="04c93b2335ce9d2b0aad7fb0e5f6f982" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.515957 kubelet[2547]: I0117 12:22:27.515800 2547 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.515957 kubelet[2547]: I0117 12:22:27.515947 2547 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.519386 kubelet[2547]: W0117 12:22:27.519259 2547 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:27.519972 kubelet[2547]: W0117 12:22:27.519936 2547 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:27.520073 kubelet[2547]: E0117 12:22:27.520047 2547 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.522472 kubelet[2547]: W0117 12:22:27.522341 2547 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:27.522731 kubelet[2547]: E0117 12:22:27.522591 2547 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-6-c2def92c28\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.573563 kubelet[2547]: I0117 12:22:27.573285 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d009c2ee6963a764dd8d634e11717e07-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-6-c2def92c28\" (UID: \"d009c2ee6963a764dd8d634e11717e07\") " pod="kube-system/kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.573563 kubelet[2547]: I0117 12:22:27.573340 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.573563 kubelet[2547]: I0117 12:22:27.573360 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-k8s-certs\") pod 
\"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.573563 kubelet[2547]: I0117 12:22:27.573379 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d009c2ee6963a764dd8d634e11717e07-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-6-c2def92c28\" (UID: \"d009c2ee6963a764dd8d634e11717e07\") " pod="kube-system/kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.573563 kubelet[2547]: I0117 12:22:27.573397 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d009c2ee6963a764dd8d634e11717e07-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-6-c2def92c28\" (UID: \"d009c2ee6963a764dd8d634e11717e07\") " pod="kube-system/kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.573836 kubelet[2547]: I0117 12:22:27.573422 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.573836 kubelet[2547]: I0117 12:22:27.573444 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.573836 kubelet[2547]: I0117 12:22:27.573464 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28be93d205fa1ee2f8e04912acf1536f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-6-c2def92c28\" (UID: \"28be93d205fa1ee2f8e04912acf1536f\") " pod="kube-system/kube-scheduler-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.573836 kubelet[2547]: I0117 12:22:27.573484 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/04c93b2335ce9d2b0aad7fb0e5f6f982-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-6-c2def92c28\" (UID: \"04c93b2335ce9d2b0aad7fb0e5f6f982\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:27.822274 kubelet[2547]: E0117 12:22:27.821799 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:27.824527 kubelet[2547]: E0117 12:22:27.823489 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:27.824527 kubelet[2547]: E0117 12:22:27.823749 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:28.336001 kubelet[2547]: I0117 12:22:28.335752 2547 
apiserver.go:52] "Watching apiserver" Jan 17 12:22:28.373011 kubelet[2547]: I0117 12:22:28.372938 2547 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:22:28.439232 kubelet[2547]: E0117 12:22:28.438441 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:28.439232 kubelet[2547]: E0117 12:22:28.438692 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:28.468250 kubelet[2547]: W0117 12:22:28.468213 2547 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:28.468639 kubelet[2547]: E0117 12:22:28.468616 2547 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-6-c2def92c28\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-6-c2def92c28" Jan 17 12:22:28.469707 kubelet[2547]: E0117 12:22:28.469591 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:28.585808 kubelet[2547]: I0117 12:22:28.585643 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-6-c2def92c28" podStartSLOduration=5.58559991 podStartE2EDuration="5.58559991s" podCreationTimestamp="2025-01-17 12:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:28.585229446 +0000 UTC m=+1.361494361" watchObservedRunningTime="2025-01-17 12:22:28.58559991 +0000 UTC m=+1.361864826" Jan 17 12:22:28.663043 kubelet[2547]: I0117 12:22:28.662702 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-6-c2def92c28" podStartSLOduration=1.662647417 podStartE2EDuration="1.662647417s" podCreationTimestamp="2025-01-17 12:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:28.637273744 +0000 UTC m=+1.413538662" watchObservedRunningTime="2025-01-17 12:22:28.662647417 +0000 UTC m=+1.438912334" Jan 17 12:22:29.445367 kubelet[2547]: E0117 12:22:29.445038 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:32.571064 sudo[1646]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:32.575457 sshd[1643]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:32.580951 systemd[1]: sshd@6-164.92.109.43:22-139.178.68.195:48632.service: Deactivated successfully. Jan 17 12:22:32.584837 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:22:32.585516 systemd[1]: session-7.scope: Consumed 5.248s CPU time, 187.0M memory peak, 0B memory swap peak. Jan 17 12:22:32.586512 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:22:32.587794 systemd-logind[1442]: Removed session 7. 
Jan 17 12:22:32.793920 kubelet[2547]: E0117 12:22:32.793876 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:32.826612 kubelet[2547]: I0117 12:22:32.826219 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-6-c2def92c28" podStartSLOduration=7.826116178 podStartE2EDuration="7.826116178s" podCreationTimestamp="2025-01-17 12:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:28.662897181 +0000 UTC m=+1.439162125" watchObservedRunningTime="2025-01-17 12:22:32.826116178 +0000 UTC m=+5.602381098" Jan 17 12:22:33.454226 kubelet[2547]: E0117 12:22:33.454117 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:36.150444 kubelet[2547]: E0117 12:22:36.150396 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:36.458456 kubelet[2547]: E0117 12:22:36.458329 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:36.927268 kubelet[2547]: E0117 12:22:36.926549 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:37.460522 kubelet[2547]: E0117 12:22:37.459760 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:39.606497 update_engine[1445]: I20250117 12:22:39.606392 1445 update_attempter.cc:509] Updating boot flags... Jan 17 12:22:39.638207 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2630) Jan 17 12:22:39.724251 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2628) Jan 17 12:22:39.786350 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2628) Jan 17 12:22:40.599766 kubelet[2547]: I0117 12:22:40.599721 2547 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:22:40.601221 containerd[1460]: time="2025-01-17T12:22:40.600908791Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
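
At 12:22:40 kubelet hands the node's pod CIDR (192.168.0.0/24) to the runtime while containerd keeps waiting for a CNI config to appear, which is why the Calico pods scheduled below matter. As a quick sanity check on the address budget a /24 pod CIDR gives this node (subtracting network and broadcast addresses is the usual convention, an assumption rather than something the log states):

// cidrmath.go - size of the pod CIDR that kubelet reported.
package main

import (
	"fmt"
	"net"
)

func main() {
	// Value copied from the kuberuntime_manager record above.
	_, podCIDR, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := podCIDR.Mask.Size()
	total := 1 << (bits - ones) // 2^8 = 256 addresses
	fmt.Printf("%s holds %d addresses (~%d usable pod IPs)\n", podCIDR, total, total-2)
}
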
Jan 17 12:22:40.602884 kubelet[2547]: I0117 12:22:40.601264 2547 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:22:40.627043 kubelet[2547]: I0117 12:22:40.626842 2547 topology_manager.go:215] "Topology Admit Handler" podUID="c9a50f90-4988-45f8-95c2-8eb8f5188c99" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-wtnzz" Jan 17 12:22:40.630589 kubelet[2547]: I0117 12:22:40.630476 2547 topology_manager.go:215] "Topology Admit Handler" podUID="9b05f129-c804-42ea-97e5-7f73cc0e33ed" podNamespace="kube-system" podName="kube-proxy-mzldh" Jan 17 12:22:40.638031 systemd[1]: Created slice kubepods-besteffort-podc9a50f90_4988_45f8_95c2_8eb8f5188c99.slice - libcontainer container kubepods-besteffort-podc9a50f90_4988_45f8_95c2_8eb8f5188c99.slice. Jan 17 12:22:40.651267 systemd[1]: Created slice kubepods-besteffort-pod9b05f129_c804_42ea_97e5_7f73cc0e33ed.slice - libcontainer container kubepods-besteffort-pod9b05f129_c804_42ea_97e5_7f73cc0e33ed.slice. Jan 17 12:22:40.655694 kubelet[2547]: W0117 12:22:40.655612 2547 reflector.go:539] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-6-c2def92c28" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.0-6-c2def92c28' and this object Jan 17 12:22:40.655694 kubelet[2547]: E0117 12:22:40.655657 2547 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-6-c2def92c28" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.0-6-c2def92c28' and this object Jan 17 12:22:40.655694 kubelet[2547]: W0117 12:22:40.655669 2547 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.3.0-6-c2def92c28" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.0-6-c2def92c28' and this object Jan 17 12:22:40.657489 kubelet[2547]: E0117 12:22:40.655708 2547 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.3.0-6-c2def92c28" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.0-6-c2def92c28' and this object Jan 17 12:22:40.657489 kubelet[2547]: W0117 12:22:40.655887 2547 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-6-c2def92c28" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-6-c2def92c28' and this object Jan 17 12:22:40.657489 kubelet[2547]: E0117 12:22:40.655912 2547 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-6-c2def92c28" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between 
node 'ci-4081.3.0-6-c2def92c28' and this object Jan 17 12:22:40.657489 kubelet[2547]: W0117 12:22:40.656333 2547 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.0-6-c2def92c28" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-6-c2def92c28' and this object Jan 17 12:22:40.657489 kubelet[2547]: E0117 12:22:40.656355 2547 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.0-6-c2def92c28" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-6-c2def92c28' and this object Jan 17 12:22:40.750100 kubelet[2547]: I0117 12:22:40.750033 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm262\" (UniqueName: \"kubernetes.io/projected/9b05f129-c804-42ea-97e5-7f73cc0e33ed-kube-api-access-mm262\") pod \"kube-proxy-mzldh\" (UID: \"9b05f129-c804-42ea-97e5-7f73cc0e33ed\") " pod="kube-system/kube-proxy-mzldh" Jan 17 12:22:40.750100 kubelet[2547]: I0117 12:22:40.750092 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c9a50f90-4988-45f8-95c2-8eb8f5188c99-var-lib-calico\") pod \"tigera-operator-c7ccbd65-wtnzz\" (UID: \"c9a50f90-4988-45f8-95c2-8eb8f5188c99\") " pod="tigera-operator/tigera-operator-c7ccbd65-wtnzz" Jan 17 12:22:40.750100 kubelet[2547]: I0117 12:22:40.750113 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b05f129-c804-42ea-97e5-7f73cc0e33ed-xtables-lock\") pod \"kube-proxy-mzldh\" (UID: \"9b05f129-c804-42ea-97e5-7f73cc0e33ed\") " pod="kube-system/kube-proxy-mzldh" Jan 17 12:22:40.750389 kubelet[2547]: I0117 12:22:40.750156 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b05f129-c804-42ea-97e5-7f73cc0e33ed-kube-proxy\") pod \"kube-proxy-mzldh\" (UID: \"9b05f129-c804-42ea-97e5-7f73cc0e33ed\") " pod="kube-system/kube-proxy-mzldh" Jan 17 12:22:40.750389 kubelet[2547]: I0117 12:22:40.750220 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnsrx\" (UniqueName: \"kubernetes.io/projected/c9a50f90-4988-45f8-95c2-8eb8f5188c99-kube-api-access-hnsrx\") pod \"tigera-operator-c7ccbd65-wtnzz\" (UID: \"c9a50f90-4988-45f8-95c2-8eb8f5188c99\") " pod="tigera-operator/tigera-operator-c7ccbd65-wtnzz" Jan 17 12:22:40.750389 kubelet[2547]: I0117 12:22:40.750239 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b05f129-c804-42ea-97e5-7f73cc0e33ed-lib-modules\") pod \"kube-proxy-mzldh\" (UID: \"9b05f129-c804-42ea-97e5-7f73cc0e33ed\") " pod="kube-system/kube-proxy-mzldh" Jan 17 12:22:41.849235 containerd[1460]: time="2025-01-17T12:22:41.849179780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-wtnzz,Uid:c9a50f90-4988-45f8-95c2-8eb8f5188c99,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:22:41.864196 kubelet[2547]: E0117 12:22:41.862655 2547 
projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:22:41.864196 kubelet[2547]: E0117 12:22:41.862709 2547 projected.go:200] Error preparing data for projected volume kube-api-access-mm262 for pod kube-system/kube-proxy-mzldh: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:22:41.864196 kubelet[2547]: E0117 12:22:41.862792 2547 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b05f129-c804-42ea-97e5-7f73cc0e33ed-kube-api-access-mm262 podName:9b05f129-c804-42ea-97e5-7f73cc0e33ed nodeName:}" failed. No retries permitted until 2025-01-17 12:22:42.362766817 +0000 UTC m=+15.139031713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mm262" (UniqueName: "kubernetes.io/projected/9b05f129-c804-42ea-97e5-7f73cc0e33ed-kube-api-access-mm262") pod "kube-proxy-mzldh" (UID: "9b05f129-c804-42ea-97e5-7f73cc0e33ed") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:22:41.878373 containerd[1460]: time="2025-01-17T12:22:41.878108339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:41.878373 containerd[1460]: time="2025-01-17T12:22:41.878201085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:41.878373 containerd[1460]: time="2025-01-17T12:22:41.878217358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:41.878373 containerd[1460]: time="2025-01-17T12:22:41.878305288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:41.907469 systemd[1]: Started cri-containerd-10810844bb8a2f696082c9899c46199ef9364aaaf50060a8828f73f6aecdf880.scope - libcontainer container 10810844bb8a2f696082c9899c46199ef9364aaaf50060a8828f73f6aecdf880. Jan 17 12:22:41.956243 containerd[1460]: time="2025-01-17T12:22:41.956136962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-wtnzz,Uid:c9a50f90-4988-45f8-95c2-8eb8f5188c99,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"10810844bb8a2f696082c9899c46199ef9364aaaf50060a8828f73f6aecdf880\"" Jan 17 12:22:41.958576 containerd[1460]: time="2025-01-17T12:22:41.958440942Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:22:42.755241 kubelet[2547]: E0117 12:22:42.754581 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:42.755955 containerd[1460]: time="2025-01-17T12:22:42.755574884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzldh,Uid:9b05f129-c804-42ea-97e5-7f73cc0e33ed,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:42.782255 containerd[1460]: time="2025-01-17T12:22:42.781732585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:42.782255 containerd[1460]: time="2025-01-17T12:22:42.781813656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:42.782255 containerd[1460]: time="2025-01-17T12:22:42.781827911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:42.782255 containerd[1460]: time="2025-01-17T12:22:42.781991937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:42.806769 systemd[1]: run-containerd-runc-k8s.io-ce75146be69acb4f34f4c774f91019b7eb2a6848ca0ddda0d4a6edd689c54193-runc.nHqI1a.mount: Deactivated successfully. Jan 17 12:22:42.816574 systemd[1]: Started cri-containerd-ce75146be69acb4f34f4c774f91019b7eb2a6848ca0ddda0d4a6edd689c54193.scope - libcontainer container ce75146be69acb4f34f4c774f91019b7eb2a6848ca0ddda0d4a6edd689c54193. Jan 17 12:22:42.850633 containerd[1460]: time="2025-01-17T12:22:42.850590770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzldh,Uid:9b05f129-c804-42ea-97e5-7f73cc0e33ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce75146be69acb4f34f4c774f91019b7eb2a6848ca0ddda0d4a6edd689c54193\"" Jan 17 12:22:42.852342 kubelet[2547]: E0117 12:22:42.852305 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:42.856630 containerd[1460]: time="2025-01-17T12:22:42.856486229Z" level=info msg="CreateContainer within sandbox \"ce75146be69acb4f34f4c774f91019b7eb2a6848ca0ddda0d4a6edd689c54193\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:22:42.874772 containerd[1460]: time="2025-01-17T12:22:42.874711741Z" level=info msg="CreateContainer within sandbox \"ce75146be69acb4f34f4c774f91019b7eb2a6848ca0ddda0d4a6edd689c54193\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89ba3cddaa98fffbc9d04475bb88d52e2f6e905b55d7a20426b95893c7dceadf\"" Jan 17 12:22:42.875869 containerd[1460]: time="2025-01-17T12:22:42.875798957Z" level=info msg="StartContainer for \"89ba3cddaa98fffbc9d04475bb88d52e2f6e905b55d7a20426b95893c7dceadf\"" Jan 17 12:22:42.908477 systemd[1]: Started cri-containerd-89ba3cddaa98fffbc9d04475bb88d52e2f6e905b55d7a20426b95893c7dceadf.scope - libcontainer container 89ba3cddaa98fffbc9d04475bb88d52e2f6e905b55d7a20426b95893c7dceadf. Jan 17 12:22:42.942736 containerd[1460]: time="2025-01-17T12:22:42.942669513Z" level=info msg="StartContainer for \"89ba3cddaa98fffbc9d04475bb88d52e2f6e905b55d7a20426b95893c7dceadf\" returns successfully" Jan 17 12:22:43.477062 kubelet[2547]: E0117 12:22:43.477031 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:43.763448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834970844.mount: Deactivated successfully. 
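
The MountVolume.SetUp failure for kube-api-access-mm262 a little earlier was parked with "No retries permitted until ... (durationBeforeRetry 500ms)": kubelet's volume operations back off exponentially, doubling from that 500ms initial delay up to a cap (on the order of two minutes in the kubelet sources as I recall; treat the exact cap as an assumption). A sketch of that retry shape:

// backoff.go - the doubling retry delay suggested by
// "durationBeforeRetry 500ms" in the mount failure above.
// The initial delay is from the log; the ~2m cap is an assumption.
package main

import (
	"fmt"
	"time"
)

func nextDelay(d, limit time.Duration) time.Duration {
	d *= 2
	if d > limit {
		return limit
	}
	return d
}

func main() {
	d := 500 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: wait %v\n", attempt, d)
		d = nextDelay(d, 2*time.Minute)
	}
}

In this log the first retry at ~12:22:42.36 succeeded (the sandbox for kube-proxy-mzldh comes up moments later), so the backoff never grew past its initial value.
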
Jan 17 12:22:47.431553 kubelet[2547]: I0117 12:22:47.431289 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mzldh" podStartSLOduration=7.431237004 podStartE2EDuration="7.431237004s" podCreationTimestamp="2025-01-17 12:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:43.494983809 +0000 UTC m=+16.271248728" watchObservedRunningTime="2025-01-17 12:22:47.431237004 +0000 UTC m=+20.207501936" Jan 17 12:22:50.679571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758441567.mount: Deactivated successfully. Jan 17 12:22:51.217085 containerd[1460]: time="2025-01-17T12:22:51.217026315Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:51.218291 containerd[1460]: time="2025-01-17T12:22:51.218195592Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764349" Jan 17 12:22:51.218839 containerd[1460]: time="2025-01-17T12:22:51.218797868Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:51.221958 containerd[1460]: time="2025-01-17T12:22:51.221861576Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:51.222823 containerd[1460]: time="2025-01-17T12:22:51.222678131Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 9.26418997s" Jan 17 12:22:51.222823 containerd[1460]: time="2025-01-17T12:22:51.222716957Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:22:51.225466 containerd[1460]: time="2025-01-17T12:22:51.225337899Z" level=info msg="CreateContainer within sandbox \"10810844bb8a2f696082c9899c46199ef9364aaaf50060a8828f73f6aecdf880\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:22:51.248723 containerd[1460]: time="2025-01-17T12:22:51.248635381Z" level=info msg="CreateContainer within sandbox \"10810844bb8a2f696082c9899c46199ef9364aaaf50060a8828f73f6aecdf880\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"509e7a5897569b66d887bc377c51413a56e2adcdd54540bc06fd7ce25726a7f9\"" Jan 17 12:22:51.249614 containerd[1460]: time="2025-01-17T12:22:51.249520741Z" level=info msg="StartContainer for \"509e7a5897569b66d887bc377c51413a56e2adcdd54540bc06fd7ce25726a7f9\"" Jan 17 12:22:51.286391 systemd[1]: Started cri-containerd-509e7a5897569b66d887bc377c51413a56e2adcdd54540bc06fd7ce25726a7f9.scope - libcontainer container 509e7a5897569b66d887bc377c51413a56e2adcdd54540bc06fd7ce25726a7f9. 
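
The operator image pull that just finished gives enough numbers for a throughput estimate: containerd read 21,764,349 bytes ("bytes read" in the stop-pulling event) over the 9.26418997s it reports for the pull, i.e. roughly 2.3 MB/s from quay.io. The arithmetic as a one-liner, with both constants copied from the records above:

// pullrate.go - rough throughput of the tigera/operator pull.
package main

import "fmt"

func main() {
	const bytesRead = 21764349 // "bytes read" from the stop-pulling event
	const seconds = 9.26418997 // pull duration reported by containerd
	fmt.Printf("%.2f MB/s\n", bytesRead/seconds/1e6) // ≈ 2.35 MB/s
}
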
Jan 17 12:22:51.324213 containerd[1460]: time="2025-01-17T12:22:51.324112029Z" level=info msg="StartContainer for \"509e7a5897569b66d887bc377c51413a56e2adcdd54540bc06fd7ce25726a7f9\" returns successfully" Jan 17 12:22:55.213193 kubelet[2547]: I0117 12:22:55.211977 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-wtnzz" podStartSLOduration=5.946709146 podStartE2EDuration="15.211912607s" podCreationTimestamp="2025-01-17 12:22:40 +0000 UTC" firstStartedPulling="2025-01-17 12:22:41.957854498 +0000 UTC m=+14.734119393" lastFinishedPulling="2025-01-17 12:22:51.223057938 +0000 UTC m=+23.999322854" observedRunningTime="2025-01-17 12:22:53.067425265 +0000 UTC m=+25.843690181" watchObservedRunningTime="2025-01-17 12:22:55.211912607 +0000 UTC m=+27.988177525" Jan 17 12:22:55.213193 kubelet[2547]: I0117 12:22:55.212369 2547 topology_manager.go:215] "Topology Admit Handler" podUID="55e1029c-d47a-4683-a765-1ff7d03b043c" podNamespace="calico-system" podName="calico-typha-556b68dbb6-rfckr" Jan 17 12:22:55.236540 systemd[1]: Created slice kubepods-besteffort-pod55e1029c_d47a_4683_a765_1ff7d03b043c.slice - libcontainer container kubepods-besteffort-pod55e1029c_d47a_4683_a765_1ff7d03b043c.slice. Jan 17 12:22:55.354470 kubelet[2547]: I0117 12:22:55.354410 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/55e1029c-d47a-4683-a765-1ff7d03b043c-typha-certs\") pod \"calico-typha-556b68dbb6-rfckr\" (UID: \"55e1029c-d47a-4683-a765-1ff7d03b043c\") " pod="calico-system/calico-typha-556b68dbb6-rfckr" Jan 17 12:22:55.354470 kubelet[2547]: I0117 12:22:55.354472 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55e1029c-d47a-4683-a765-1ff7d03b043c-tigera-ca-bundle\") pod \"calico-typha-556b68dbb6-rfckr\" (UID: \"55e1029c-d47a-4683-a765-1ff7d03b043c\") " pod="calico-system/calico-typha-556b68dbb6-rfckr" Jan 17 12:22:55.354861 kubelet[2547]: I0117 12:22:55.354496 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kklq\" (UniqueName: \"kubernetes.io/projected/55e1029c-d47a-4683-a765-1ff7d03b043c-kube-api-access-9kklq\") pod \"calico-typha-556b68dbb6-rfckr\" (UID: \"55e1029c-d47a-4683-a765-1ff7d03b043c\") " pod="calico-system/calico-typha-556b68dbb6-rfckr" Jan 17 12:22:55.561547 kubelet[2547]: I0117 12:22:55.561158 2547 topology_manager.go:215] "Topology Admit Handler" podUID="d592a5fe-0a0c-4ded-8820-1420c58546f4" podNamespace="calico-system" podName="calico-node-m2jmt" Jan 17 12:22:55.569664 kubelet[2547]: E0117 12:22:55.569204 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:55.572407 containerd[1460]: time="2025-01-17T12:22:55.571453720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-556b68dbb6-rfckr,Uid:55e1029c-d47a-4683-a765-1ff7d03b043c,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:55.600957 systemd[1]: Created slice kubepods-besteffort-podd592a5fe_0a0c_4ded_8820_1420c58546f4.slice - libcontainer container kubepods-besteffort-podd592a5fe_0a0c_4ded_8820_1420c58546f4.slice. 
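
The tigera-operator startup record above is internally consistent: podStartE2EDuration (15.211912607s, creation to observed running) minus the image-pull window (lastFinishedPulling − firstStartedPulling = 12:22:51.223057938 − 12:22:41.957854498 ≈ 9.265203440s) leaves 5.946709167s, which matches the reported podStartSLOduration=5.946709146s up to timestamp truncation; the SLO figure simply excludes pull time. The same check in Go, with all values copied from the record:

// sloduration.go - podStartSLOduration = E2E duration minus pull window.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	started, _ := time.Parse(layout, "2025-01-17 12:22:41.957854498 +0000 UTC")  // firstStartedPulling
	finished, _ := time.Parse(layout, "2025-01-17 12:22:51.223057938 +0000 UTC") // lastFinishedPulling
	pull := finished.Sub(started)
	e2e := 15211912607 * time.Nanosecond // podStartE2EDuration
	fmt.Println("pull:", pull, "SLO:", e2e-pull) // ≈ 9.26520344s and ≈ 5.946709167s
}
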
Jan 17 12:22:55.657179 containerd[1460]: time="2025-01-17T12:22:55.655156370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:55.657179 containerd[1460]: time="2025-01-17T12:22:55.655711184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:55.657179 containerd[1460]: time="2025-01-17T12:22:55.655738621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:55.657179 containerd[1460]: time="2025-01-17T12:22:55.655999449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:55.663659 kubelet[2547]: I0117 12:22:55.662614 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d592a5fe-0a0c-4ded-8820-1420c58546f4-tigera-ca-bundle\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.663659 kubelet[2547]: I0117 12:22:55.662695 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d592a5fe-0a0c-4ded-8820-1420c58546f4-var-run-calico\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.663659 kubelet[2547]: I0117 12:22:55.662727 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d592a5fe-0a0c-4ded-8820-1420c58546f4-cni-log-dir\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.663659 kubelet[2547]: I0117 12:22:55.662761 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d592a5fe-0a0c-4ded-8820-1420c58546f4-node-certs\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.663659 kubelet[2547]: I0117 12:22:55.662790 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9qpl\" (UniqueName: \"kubernetes.io/projected/d592a5fe-0a0c-4ded-8820-1420c58546f4-kube-api-access-k9qpl\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.664109 kubelet[2547]: I0117 12:22:55.662819 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d592a5fe-0a0c-4ded-8820-1420c58546f4-xtables-lock\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.664109 kubelet[2547]: I0117 12:22:55.662844 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d592a5fe-0a0c-4ded-8820-1420c58546f4-var-lib-calico\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 
12:22:55.664109 kubelet[2547]: I0117 12:22:55.662873 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d592a5fe-0a0c-4ded-8820-1420c58546f4-cni-bin-dir\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.664109 kubelet[2547]: I0117 12:22:55.662904 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d592a5fe-0a0c-4ded-8820-1420c58546f4-lib-modules\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.664109 kubelet[2547]: I0117 12:22:55.662939 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d592a5fe-0a0c-4ded-8820-1420c58546f4-flexvol-driver-host\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.664265 kubelet[2547]: I0117 12:22:55.662972 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d592a5fe-0a0c-4ded-8820-1420c58546f4-policysync\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.664265 kubelet[2547]: I0117 12:22:55.663042 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d592a5fe-0a0c-4ded-8820-1420c58546f4-cni-net-dir\") pod \"calico-node-m2jmt\" (UID: \"d592a5fe-0a0c-4ded-8820-1420c58546f4\") " pod="calico-system/calico-node-m2jmt" Jan 17 12:22:55.711522 systemd[1]: Started cri-containerd-c3d540feb0d7830f83d907df48d631903703157200614a0618c854fa3335fb4a.scope - libcontainer container c3d540feb0d7830f83d907df48d631903703157200614a0618c854fa3335fb4a. Jan 17 12:22:55.734393 kubelet[2547]: I0117 12:22:55.734349 2547 topology_manager.go:215] "Topology Admit Handler" podUID="151ac44f-4692-405d-a3ad-26a51dc59114" podNamespace="calico-system" podName="csi-node-driver-gzkcx" Jan 17 12:22:55.736039 kubelet[2547]: E0117 12:22:55.735744 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzkcx" podUID="151ac44f-4692-405d-a3ad-26a51dc59114" Jan 17 12:22:55.781288 kubelet[2547]: E0117 12:22:55.781233 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.781288 kubelet[2547]: W0117 12:22:55.781276 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.781484 kubelet[2547]: E0117 12:22:55.781349 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the driver-call.go:262 / driver-call.go:149 / plugins.go:730 FlexVolume probe-error triplet above repeats, identical apart from timestamps, a further 20 times between 12:22:55.781 and 12:22:55.796; the last repeat ends:] Jan 17 12:22:55.796901 kubelet[2547]: E0117 12:22:55.796627 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:55.804917 kubelet[2547]: E0117 12:22:55.804873 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.804917 kubelet[2547]: W0117 12:22:55.804909 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.805431 kubelet[2547]: E0117 12:22:55.805219 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.833214 containerd[1460]: time="2025-01-17T12:22:55.832438142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-556b68dbb6-rfckr,Uid:55e1029c-d47a-4683-a765-1ff7d03b043c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3d540feb0d7830f83d907df48d631903703157200614a0618c854fa3335fb4a\"" Jan 17 12:22:55.836379 kubelet[2547]: E0117 12:22:55.835002 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:55.841192 containerd[1460]: time="2025-01-17T12:22:55.840307883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:22:55.868237 kubelet[2547]: E0117 12:22:55.867939 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.868237 kubelet[2547]: W0117 12:22:55.867973 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.868237 kubelet[2547]: E0117 12:22:55.868006 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.868237 kubelet[2547]: I0117 12:22:55.868082 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/151ac44f-4692-405d-a3ad-26a51dc59114-varrun\") pod \"csi-node-driver-gzkcx\" (UID: \"151ac44f-4692-405d-a3ad-26a51dc59114\") " pod="calico-system/csi-node-driver-gzkcx" Jan 17 12:22:55.870000 kubelet[2547]: E0117 12:22:55.869511 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.870000 kubelet[2547]: W0117 12:22:55.869544 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.870000 kubelet[2547]: E0117 12:22:55.869703 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:55.870785 kubelet[2547]: E0117 12:22:55.870462 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.870785 kubelet[2547]: W0117 12:22:55.870487 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.871433 kubelet[2547]: E0117 12:22:55.871244 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.872195 kubelet[2547]: E0117 12:22:55.871535 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.872195 kubelet[2547]: W0117 12:22:55.871557 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.872195 kubelet[2547]: E0117 12:22:55.871593 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.872195 kubelet[2547]: I0117 12:22:55.871645 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/151ac44f-4692-405d-a3ad-26a51dc59114-kubelet-dir\") pod \"csi-node-driver-gzkcx\" (UID: \"151ac44f-4692-405d-a3ad-26a51dc59114\") " pod="calico-system/csi-node-driver-gzkcx" Jan 17 12:22:55.872856 kubelet[2547]: E0117 12:22:55.872806 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.872856 kubelet[2547]: W0117 12:22:55.872830 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.872997 kubelet[2547]: E0117 12:22:55.872898 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.873250 kubelet[2547]: E0117 12:22:55.873228 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.873355 kubelet[2547]: W0117 12:22:55.873246 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.873355 kubelet[2547]: E0117 12:22:55.873351 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:55.873581 kubelet[2547]: E0117 12:22:55.873560 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.873581 kubelet[2547]: W0117 12:22:55.873578 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.873720 kubelet[2547]: E0117 12:22:55.873595 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.873720 kubelet[2547]: I0117 12:22:55.873639 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/151ac44f-4692-405d-a3ad-26a51dc59114-registration-dir\") pod \"csi-node-driver-gzkcx\" (UID: \"151ac44f-4692-405d-a3ad-26a51dc59114\") " pod="calico-system/csi-node-driver-gzkcx" Jan 17 12:22:55.875212 kubelet[2547]: E0117 12:22:55.874266 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.875212 kubelet[2547]: W0117 12:22:55.874287 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.875212 kubelet[2547]: E0117 12:22:55.874412 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.875212 kubelet[2547]: I0117 12:22:55.874719 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2kxv\" (UniqueName: \"kubernetes.io/projected/151ac44f-4692-405d-a3ad-26a51dc59114-kube-api-access-s2kxv\") pod \"csi-node-driver-gzkcx\" (UID: \"151ac44f-4692-405d-a3ad-26a51dc59114\") " pod="calico-system/csi-node-driver-gzkcx" Jan 17 12:22:55.875212 kubelet[2547]: E0117 12:22:55.875088 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.875212 kubelet[2547]: W0117 12:22:55.875102 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.875538 kubelet[2547]: E0117 12:22:55.875224 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.875728 kubelet[2547]: E0117 12:22:55.875701 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.875728 kubelet[2547]: W0117 12:22:55.875720 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.875842 kubelet[2547]: E0117 12:22:55.875756 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:55.876587 kubelet[2547]: E0117 12:22:55.876556 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.876587 kubelet[2547]: W0117 12:22:55.876578 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.876724 kubelet[2547]: E0117 12:22:55.876612 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.876799 kubelet[2547]: I0117 12:22:55.876780 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/151ac44f-4692-405d-a3ad-26a51dc59114-socket-dir\") pod \"csi-node-driver-gzkcx\" (UID: \"151ac44f-4692-405d-a3ad-26a51dc59114\") " pod="calico-system/csi-node-driver-gzkcx" Jan 17 12:22:55.877029 kubelet[2547]: E0117 12:22:55.877005 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.877029 kubelet[2547]: W0117 12:22:55.877026 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.878486 kubelet[2547]: E0117 12:22:55.877050 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.878486 kubelet[2547]: E0117 12:22:55.877586 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.878486 kubelet[2547]: W0117 12:22:55.877599 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.878486 kubelet[2547]: E0117 12:22:55.877891 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.878486 kubelet[2547]: E0117 12:22:55.878447 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.878486 kubelet[2547]: W0117 12:22:55.878470 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.878486 kubelet[2547]: E0117 12:22:55.878489 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:55.879045 kubelet[2547]: E0117 12:22:55.879018 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.879045 kubelet[2547]: W0117 12:22:55.879039 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.879208 kubelet[2547]: E0117 12:22:55.879058 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.917790 kubelet[2547]: E0117 12:22:55.917703 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:55.922196 containerd[1460]: time="2025-01-17T12:22:55.920573251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2jmt,Uid:d592a5fe-0a0c-4ded-8820-1420c58546f4,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:55.956317 containerd[1460]: time="2025-01-17T12:22:55.956131220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:55.956317 containerd[1460]: time="2025-01-17T12:22:55.956263603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:55.956317 containerd[1460]: time="2025-01-17T12:22:55.956285802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:55.956742 containerd[1460]: time="2025-01-17T12:22:55.956425871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:55.978478 kubelet[2547]: E0117 12:22:55.978121 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.978478 kubelet[2547]: W0117 12:22:55.978148 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.978478 kubelet[2547]: E0117 12:22:55.978189 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:55.979070 kubelet[2547]: E0117 12:22:55.979032 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:55.979070 kubelet[2547]: W0117 12:22:55.979057 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:55.979225 kubelet[2547]: E0117 12:22:55.979086 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 17 12:22:55.989575 systemd[1]: Started cri-containerd-b6aff15774fd558fc0caa68a9d34046edf12f839a8c60d489c540278339c10e2.scope - libcontainer container b6aff15774fd558fc0caa68a9d34046edf12f839a8c60d489c540278339c10e2.
Jan 17 12:22:56.013302 kubelet[2547]: E0117 12:22:56.013262 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:22:56.013302 kubelet[2547]: W0117 12:22:56.013290 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:22:56.013302 kubelet[2547]: E0117 12:22:56.013318 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:22:56.053919 containerd[1460]: time="2025-01-17T12:22:56.053811422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2jmt,Uid:d592a5fe-0a0c-4ded-8820-1420c58546f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6aff15774fd558fc0caa68a9d34046edf12f839a8c60d489c540278339c10e2\""
Jan 17 12:22:56.056079 kubelet[2547]: E0117 12:22:56.055792 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:22:57.092963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263979614.mount: Deactivated successfully.
Jan 17 12:22:57.402520 kubelet[2547]: E0117 12:22:57.400702 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzkcx" podUID="151ac44f-4692-405d-a3ad-26a51dc59114"
Jan 17 12:22:57.770994 containerd[1460]: time="2025-01-17T12:22:57.770518250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:57.773594 containerd[1460]: time="2025-01-17T12:22:57.773306616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 17 12:22:57.775384 containerd[1460]: time="2025-01-17T12:22:57.775293071Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:57.778891 containerd[1460]: time="2025-01-17T12:22:57.778802520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:57.779972 containerd[1460]: time="2025-01-17T12:22:57.779780317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.939429318s"
Jan 17 12:22:57.779972 containerd[1460]: time="2025-01-17T12:22:57.779837866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 17 12:22:57.782557 containerd[1460]: time="2025-01-17T12:22:57.781240447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
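
The ImageCreate / stop-pulling / Pulled sequence above is containerd's event stream for a kubelet-initiated CRI pull. The same pull can be reproduced out-of-band with the containerd 1.x Go client against the k8s.io namespace (socket path and namespace are the conventional defaults, assumed rather than taken from this log):

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Conventional containerd socket; CRI-managed images live in "k8s.io".
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Println("pulled", img.Name(), img.Target().Digest)
    }
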
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:22:57.803031 containerd[1460]: time="2025-01-17T12:22:57.802961384Z" level=info msg="CreateContainer within sandbox \"c3d540feb0d7830f83d907df48d631903703157200614a0618c854fa3335fb4a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:22:57.817155 containerd[1460]: time="2025-01-17T12:22:57.817018208Z" level=info msg="CreateContainer within sandbox \"c3d540feb0d7830f83d907df48d631903703157200614a0618c854fa3335fb4a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8069d5c57706e50bbd9844d67fc689206d09bcfb0eabb179b00806b81116b575\"" Jan 17 12:22:57.817816 containerd[1460]: time="2025-01-17T12:22:57.817774535Z" level=info msg="StartContainer for \"8069d5c57706e50bbd9844d67fc689206d09bcfb0eabb179b00806b81116b575\"" Jan 17 12:22:57.873067 systemd[1]: Started cri-containerd-8069d5c57706e50bbd9844d67fc689206d09bcfb0eabb179b00806b81116b575.scope - libcontainer container 8069d5c57706e50bbd9844d67fc689206d09bcfb0eabb179b00806b81116b575. Jan 17 12:22:57.925283 containerd[1460]: time="2025-01-17T12:22:57.925070421Z" level=info msg="StartContainer for \"8069d5c57706e50bbd9844d67fc689206d09bcfb0eabb179b00806b81116b575\" returns successfully" Jan 17 12:22:58.526474 kubelet[2547]: E0117 12:22:58.526432 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:58.620514 kubelet[2547]: E0117 12:22:58.620478 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:58.620514 kubelet[2547]: W0117 12:22:58.620506 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:58.620699 kubelet[2547]: E0117 12:22:58.620537 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:58.620810 kubelet[2547]: E0117 12:22:58.620796 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:58.620857 kubelet[2547]: W0117 12:22:58.620819 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:58.620857 kubelet[2547]: E0117 12:22:58.620837 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:58.621107 kubelet[2547]: E0117 12:22:58.621086 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:58.621107 kubelet[2547]: W0117 12:22:58.621098 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:58.621271 kubelet[2547]: E0117 12:22:58.621111 2547 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 17 12:22:58.789262 systemd[1]: run-containerd-runc-k8s.io-8069d5c57706e50bbd9844d67fc689206d09bcfb0eabb179b00806b81116b575-runc.hoWxNr.mount: Deactivated successfully.
Jan 17 12:22:59.085206 containerd[1460]: time="2025-01-17T12:22:59.085123839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:59.086435 containerd[1460]: time="2025-01-17T12:22:59.086014147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 17 12:22:59.089224 containerd[1460]: time="2025-01-17T12:22:59.087382029Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:59.125953 containerd[1460]: time="2025-01-17T12:22:59.125899377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:59.127266 containerd[1460]: time="2025-01-17T12:22:59.127216155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.345925206s"
Jan 17 12:22:59.127438 containerd[1460]: time="2025-01-17T12:22:59.127419484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 17 12:22:59.130828 containerd[1460]: time="2025-01-17T12:22:59.130065035Z" level=info msg="CreateContainer within sandbox \"b6aff15774fd558fc0caa68a9d34046edf12f839a8c60d489c540278339c10e2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 12:22:59.176008 containerd[1460]: time="2025-01-17T12:22:59.175968193Z" level=info msg="CreateContainer within sandbox \"b6aff15774fd558fc0caa68a9d34046edf12f839a8c60d489c540278339c10e2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aba141b491e3afe08daddc2acaff3f76934814b75af62208c3f6680964a17699\""
Jan 17 12:22:59.177441 containerd[1460]: time="2025-01-17T12:22:59.177404013Z" level=info msg="StartContainer for \"aba141b491e3afe08daddc2acaff3f76934814b75af62208c3f6680964a17699\""
Jan 17 12:22:59.231407 systemd[1]: Started cri-containerd-aba141b491e3afe08daddc2acaff3f76934814b75af62208c3f6680964a17699.scope - libcontainer container aba141b491e3afe08daddc2acaff3f76934814b75af62208c3f6680964a17699.
Jan 17 12:22:59.268727 containerd[1460]: time="2025-01-17T12:22:59.268659419Z" level=info msg="StartContainer for \"aba141b491e3afe08daddc2acaff3f76934814b75af62208c3f6680964a17699\" returns successfully"
Jan 17 12:22:59.292648 systemd[1]: cri-containerd-aba141b491e3afe08daddc2acaff3f76934814b75af62208c3f6680964a17699.scope: Deactivated successfully.
Jan 17 12:22:59.318053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aba141b491e3afe08daddc2acaff3f76934814b75af62208c3f6680964a17699-rootfs.mount: Deactivated successfully.
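
The aba141… container started and immediately deactivated above is calico-node's flexvol-driver init container, built from the pod2daemon-flexvol image pulled just before it: its only job is to copy the uds FlexVolume driver into the kubelet plugin directory and exit, which is why the scope ends within milliseconds and why the driver-call.go errors stop appearing after this point. Its effect is roughly this copy (destination path from the log; the source path inside the image is a hypothetical placeholder):

    package main

    import (
        "io"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        src := "/usr/local/bin/flexvol" // hypothetical location of the driver inside the image
        dstDir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"

        if err := os.MkdirAll(dstDir, 0o755); err != nil {
            log.Fatal(err)
        }
        in, err := os.Open(src)
        if err != nil {
            log.Fatal(err)
        }
        defer in.Close()

        // 0755 so kubelet can execute the driver it probes with "init".
        out, err := os.OpenFile(filepath.Join(dstDir, "uds"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()
        if _, err := io.Copy(out, in); err != nil {
            log.Fatal(err)
        }
    }
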
Jan 17 12:22:59.335041 containerd[1460]: time="2025-01-17T12:22:59.321186361Z" level=info msg="shim disconnected" id=aba141b491e3afe08daddc2acaff3f76934814b75af62208c3f6680964a17699 namespace=k8s.io
Jan 17 12:22:59.335041 containerd[1460]: time="2025-01-17T12:22:59.333995542Z" level=warning msg="cleaning up after shim disconnected" id=aba141b491e3afe08daddc2acaff3f76934814b75af62208c3f6680964a17699 namespace=k8s.io
Jan 17 12:22:59.335041 containerd[1460]: time="2025-01-17T12:22:59.334018061Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:22:59.357009 containerd[1460]: time="2025-01-17T12:22:59.356855394Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:22:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:22:59.401450 kubelet[2547]: E0117 12:22:59.400707 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzkcx" podUID="151ac44f-4692-405d-a3ad-26a51dc59114"
Jan 17 12:22:59.531085 kubelet[2547]: I0117 12:22:59.531032 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:22:59.532325 kubelet[2547]: E0117 12:22:59.532131 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:22:59.534960 kubelet[2547]: E0117 12:22:59.534523 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:22:59.535294 containerd[1460]: time="2025-01-17T12:22:59.534886172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 17 12:22:59.568056 kubelet[2547]: I0117 12:22:59.568014 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-556b68dbb6-rfckr" podStartSLOduration=2.627016579 podStartE2EDuration="4.567971739s" podCreationTimestamp="2025-01-17 12:22:55 +0000 UTC" firstStartedPulling="2025-01-17 12:22:55.839556849 +0000 UTC m=+28.615821759" lastFinishedPulling="2025-01-17 12:22:57.780512007 +0000 UTC m=+30.556776919" observedRunningTime="2025-01-17 12:22:58.541510379 +0000 UTC m=+31.317775295" watchObservedRunningTime="2025-01-17 12:22:59.567971739 +0000 UTC m=+32.344236655"
Jan 17 12:23:01.401750 kubelet[2547]: E0117 12:23:01.401695 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzkcx" podUID="151ac44f-4692-405d-a3ad-26a51dc59114"
Jan 17 12:23:02.812928 containerd[1460]: time="2025-01-17T12:23:02.812380467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:23:02.814376 containerd[1460]: time="2025-01-17T12:23:02.814298646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 17 12:23:02.815103 containerd[1460]: time="2025-01-17T12:23:02.815026649Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
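
The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (12:22:59.567971739 − 12:22:55 = 4.567971739s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock (m=+30.556776919 − m=+28.615821759 = 1.940955160s), leaving 2.627016579s. A quick check of that arithmetic:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (the m=+... values) from the latency-tracker line.
        created := 0.0               // podCreationTimestamp, 12:22:55 exactly
        watchObserved := 4.567971739 // watchObservedRunningTime relative to creation
        firstPull := 28.615821759    // firstStartedPulling, m=+...
        lastPull := 30.556776919     // lastFinishedPulling, m=+...

        e2e := watchObserved - created
        slo := e2e - (lastPull - firstPull)
        // Prints E2E=4.567971739s SLO=2.627016579s (modulo float rounding).
        fmt.Printf("E2E=%.9fs SLO=%.9fs\n", e2e, slo)
    }
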
name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:02.817190 containerd[1460]: time="2025-01-17T12:23:02.817100412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:02.818106 containerd[1460]: time="2025-01-17T12:23:02.817989031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.283059564s" Jan 17 12:23:02.818106 containerd[1460]: time="2025-01-17T12:23:02.818022565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:23:02.820905 containerd[1460]: time="2025-01-17T12:23:02.820810561Z" level=info msg="CreateContainer within sandbox \"b6aff15774fd558fc0caa68a9d34046edf12f839a8c60d489c540278339c10e2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:23:02.833090 containerd[1460]: time="2025-01-17T12:23:02.832563757Z" level=info msg="CreateContainer within sandbox \"b6aff15774fd558fc0caa68a9d34046edf12f839a8c60d489c540278339c10e2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f6bb67ea7695c5da09fff6693e0ef542e05921a4cd785b3ebc5af6f8a36f0b62\"" Jan 17 12:23:02.834388 containerd[1460]: time="2025-01-17T12:23:02.833325947Z" level=info msg="StartContainer for \"f6bb67ea7695c5da09fff6693e0ef542e05921a4cd785b3ebc5af6f8a36f0b62\"" Jan 17 12:23:02.952396 systemd[1]: Started cri-containerd-f6bb67ea7695c5da09fff6693e0ef542e05921a4cd785b3ebc5af6f8a36f0b62.scope - libcontainer container f6bb67ea7695c5da09fff6693e0ef542e05921a4cd785b3ebc5af6f8a36f0b62. 
Jan 17 12:23:02.987913 containerd[1460]: time="2025-01-17T12:23:02.987730516Z" level=info msg="StartContainer for \"f6bb67ea7695c5da09fff6693e0ef542e05921a4cd785b3ebc5af6f8a36f0b62\" returns successfully"
Jan 17 12:23:03.150892 kubelet[2547]: I0117 12:23:03.149854 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:23:03.152083 kubelet[2547]: E0117 12:23:03.151692 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:23:03.401212 kubelet[2547]: E0117 12:23:03.401025 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzkcx" podUID="151ac44f-4692-405d-a3ad-26a51dc59114"
Jan 17 12:23:03.544535 kubelet[2547]: E0117 12:23:03.544432 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:23:03.544808 kubelet[2547]: E0117 12:23:03.544781 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:23:04.503685 systemd[1]: cri-containerd-f6bb67ea7695c5da09fff6693e0ef542e05921a4cd785b3ebc5af6f8a36f0b62.scope: Deactivated successfully.
Jan 17 12:23:04.651059 kubelet[2547]: E0117 12:23:04.650774 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:23:04.655190 kubelet[2547]: I0117 12:23:04.654454 2547 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 17 12:23:04.685815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6bb67ea7695c5da09fff6693e0ef542e05921a4cd785b3ebc5af6f8a36f0b62-rootfs.mount: Deactivated successfully.
Jan 17 12:23:04.694579 containerd[1460]: time="2025-01-17T12:23:04.694481024Z" level=info msg="shim disconnected" id=f6bb67ea7695c5da09fff6693e0ef542e05921a4cd785b3ebc5af6f8a36f0b62 namespace=k8s.io
Jan 17 12:23:04.694579 containerd[1460]: time="2025-01-17T12:23:04.694568571Z" level=warning msg="cleaning up after shim disconnected" id=f6bb67ea7695c5da09fff6693e0ef542e05921a4cd785b3ebc5af6f8a36f0b62 namespace=k8s.io
Jan 17 12:23:04.694579 containerd[1460]: time="2025-01-17T12:23:04.694583266Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:23:04.729243 kubelet[2547]: I0117 12:23:04.729198 2547 topology_manager.go:215] "Topology Admit Handler" podUID="80e6a65e-0c98-4ec1-b14d-0f74c5d02c17" podNamespace="kube-system" podName="coredns-76f75df574-c6p9z"
Jan 17 12:23:04.739711 systemd[1]: Created slice kubepods-burstable-pod80e6a65e_0c98_4ec1_b14d_0f74c5d02c17.slice - libcontainer container kubepods-burstable-pod80e6a65e_0c98_4ec1_b14d_0f74c5d02c17.slice.
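
The flip from repeated "cni plugin not initialized" errors to "Fast updating node status as it just became ready" coincides with the install-cni container (f6bb…) exiting above: once it writes Calico's CNI config under /etc/cni/net.d, containerd's CRI plugin reports NetworkReady=true, the node goes Ready, and the pending coredns/calico-apiserver/kube-controllers pods below are finally admitted. The readiness gate amounts to a check like this (the conf directory is the conventional default, assumed here):

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // networkReady mirrors the effective check: the CRI plugin reports
    // NetworkReady once a CNI config (e.g. 10-calico.conflist) exists.
    func networkReady() bool {
        matches, err := filepath.Glob("/etc/cni/net.d/*.conflist")
        return err == nil && len(matches) > 0
    }

    func main() {
        fmt.Println("NetworkReady:", networkReady())
    }
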
Jan 17 12:23:04.748199 kubelet[2547]: I0117 12:23:04.745924 2547 topology_manager.go:215] "Topology Admit Handler" podUID="d31f2d26-9d64-4545-9a49-9ad99ebce942" podNamespace="calico-apiserver" podName="calico-apiserver-644c6b96bd-jvpvw"
Jan 17 12:23:04.748199 kubelet[2547]: I0117 12:23:04.746230 2547 topology_manager.go:215] "Topology Admit Handler" podUID="cf0567d8-141c-4c94-af72-85752733c14f" podNamespace="kube-system" podName="coredns-76f75df574-rsr9z"
Jan 17 12:23:04.749669 kubelet[2547]: I0117 12:23:04.749635 2547 topology_manager.go:215] "Topology Admit Handler" podUID="4a25078f-72c0-4f3c-95ba-d53d9ddcf023" podNamespace="calico-apiserver" podName="calico-apiserver-644c6b96bd-qlwqs"
Jan 17 12:23:04.749847 kubelet[2547]: I0117 12:23:04.749833 2547 topology_manager.go:215] "Topology Admit Handler" podUID="bb7db20c-9339-4707-9d88-fdbe00b2a260" podNamespace="calico-system" podName="calico-kube-controllers-84bb7b955f-qmkwr"
Jan 17 12:23:04.760369 systemd[1]: Created slice kubepods-besteffort-podd31f2d26_9d64_4545_9a49_9ad99ebce942.slice - libcontainer container kubepods-besteffort-podd31f2d26_9d64_4545_9a49_9ad99ebce942.slice.
Jan 17 12:23:04.772848 systemd[1]: Created slice kubepods-burstable-podcf0567d8_141c_4c94_af72_85752733c14f.slice - libcontainer container kubepods-burstable-podcf0567d8_141c_4c94_af72_85752733c14f.slice.
Jan 17 12:23:04.785530 systemd[1]: Created slice kubepods-besteffort-podbb7db20c_9339_4707_9d88_fdbe00b2a260.slice - libcontainer container kubepods-besteffort-podbb7db20c_9339_4707_9d88_fdbe00b2a260.slice.
Jan 17 12:23:04.794548 systemd[1]: Created slice kubepods-besteffort-pod4a25078f_72c0_4f3c_95ba_d53d9ddcf023.slice - libcontainer container kubepods-besteffort-pod4a25078f_72c0_4f3c_95ba_d53d9ddcf023.slice.
Jan 17 12:23:04.850115 kubelet[2547]: I0117 12:23:04.849665 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4a25078f-72c0-4f3c-95ba-d53d9ddcf023-calico-apiserver-certs\") pod \"calico-apiserver-644c6b96bd-qlwqs\" (UID: \"4a25078f-72c0-4f3c-95ba-d53d9ddcf023\") " pod="calico-apiserver/calico-apiserver-644c6b96bd-qlwqs"
Jan 17 12:23:04.850115 kubelet[2547]: I0117 12:23:04.849734 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n6z6\" (UniqueName: \"kubernetes.io/projected/4a25078f-72c0-4f3c-95ba-d53d9ddcf023-kube-api-access-8n6z6\") pod \"calico-apiserver-644c6b96bd-qlwqs\" (UID: \"4a25078f-72c0-4f3c-95ba-d53d9ddcf023\") " pod="calico-apiserver/calico-apiserver-644c6b96bd-qlwqs"
Jan 17 12:23:04.850115 kubelet[2547]: I0117 12:23:04.849774 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqfgm\" (UniqueName: \"kubernetes.io/projected/80e6a65e-0c98-4ec1-b14d-0f74c5d02c17-kube-api-access-wqfgm\") pod \"coredns-76f75df574-c6p9z\" (UID: \"80e6a65e-0c98-4ec1-b14d-0f74c5d02c17\") " pod="kube-system/coredns-76f75df574-c6p9z"
Jan 17 12:23:04.850115 kubelet[2547]: I0117 12:23:04.849810 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb7db20c-9339-4707-9d88-fdbe00b2a260-tigera-ca-bundle\") pod \"calico-kube-controllers-84bb7b955f-qmkwr\" (UID: \"bb7db20c-9339-4707-9d88-fdbe00b2a260\") " pod="calico-system/calico-kube-controllers-84bb7b955f-qmkwr"
Jan 17 12:23:04.850115 kubelet[2547]: I0117 12:23:04.849848 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d31f2d26-9d64-4545-9a49-9ad99ebce942-calico-apiserver-certs\") pod \"calico-apiserver-644c6b96bd-jvpvw\" (UID: \"d31f2d26-9d64-4545-9a49-9ad99ebce942\") " pod="calico-apiserver/calico-apiserver-644c6b96bd-jvpvw"
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d31f2d26-9d64-4545-9a49-9ad99ebce942-calico-apiserver-certs\") pod \"calico-apiserver-644c6b96bd-jvpvw\" (UID: \"d31f2d26-9d64-4545-9a49-9ad99ebce942\") " pod="calico-apiserver/calico-apiserver-644c6b96bd-jvpvw" Jan 17 12:23:04.850485 kubelet[2547]: I0117 12:23:04.849883 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmnj\" (UniqueName: \"kubernetes.io/projected/d31f2d26-9d64-4545-9a49-9ad99ebce942-kube-api-access-qpmnj\") pod \"calico-apiserver-644c6b96bd-jvpvw\" (UID: \"d31f2d26-9d64-4545-9a49-9ad99ebce942\") " pod="calico-apiserver/calico-apiserver-644c6b96bd-jvpvw" Jan 17 12:23:04.850485 kubelet[2547]: I0117 12:23:04.849923 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt9wl\" (UniqueName: \"kubernetes.io/projected/bb7db20c-9339-4707-9d88-fdbe00b2a260-kube-api-access-kt9wl\") pod \"calico-kube-controllers-84bb7b955f-qmkwr\" (UID: \"bb7db20c-9339-4707-9d88-fdbe00b2a260\") " pod="calico-system/calico-kube-controllers-84bb7b955f-qmkwr" Jan 17 12:23:04.850485 kubelet[2547]: I0117 12:23:04.849958 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgl8h\" (UniqueName: \"kubernetes.io/projected/cf0567d8-141c-4c94-af72-85752733c14f-kube-api-access-zgl8h\") pod \"coredns-76f75df574-rsr9z\" (UID: \"cf0567d8-141c-4c94-af72-85752733c14f\") " pod="kube-system/coredns-76f75df574-rsr9z" Jan 17 12:23:04.850485 kubelet[2547]: I0117 12:23:04.849989 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80e6a65e-0c98-4ec1-b14d-0f74c5d02c17-config-volume\") pod \"coredns-76f75df574-c6p9z\" (UID: \"80e6a65e-0c98-4ec1-b14d-0f74c5d02c17\") " pod="kube-system/coredns-76f75df574-c6p9z" Jan 17 12:23:04.850485 kubelet[2547]: I0117 12:23:04.850028 2547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf0567d8-141c-4c94-af72-85752733c14f-config-volume\") pod \"coredns-76f75df574-rsr9z\" (UID: \"cf0567d8-141c-4c94-af72-85752733c14f\") " pod="kube-system/coredns-76f75df574-rsr9z" Jan 17 12:23:05.045895 kubelet[2547]: E0117 12:23:05.045327 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:05.047892 containerd[1460]: time="2025-01-17T12:23:05.047314772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c6p9z,Uid:80e6a65e-0c98-4ec1-b14d-0f74c5d02c17,Namespace:kube-system,Attempt:0,}" Jan 17 12:23:05.067624 containerd[1460]: time="2025-01-17T12:23:05.067585686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644c6b96bd-jvpvw,Uid:d31f2d26-9d64-4545-9a49-9ad99ebce942,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:23:05.079241 kubelet[2547]: E0117 12:23:05.078387 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:05.097189 containerd[1460]: time="2025-01-17T12:23:05.096882453Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84bb7b955f-qmkwr,Uid:bb7db20c-9339-4707-9d88-fdbe00b2a260,Namespace:calico-system,Attempt:0,}" Jan 17 12:23:05.101963 containerd[1460]: time="2025-01-17T12:23:05.097227512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rsr9z,Uid:cf0567d8-141c-4c94-af72-85752733c14f,Namespace:kube-system,Attempt:0,}" Jan 17 12:23:05.103531 containerd[1460]: time="2025-01-17T12:23:05.102734320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644c6b96bd-qlwqs,Uid:4a25078f-72c0-4f3c-95ba-d53d9ddcf023,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:23:05.414787 systemd[1]: Created slice kubepods-besteffort-pod151ac44f_4692_405d_a3ad_26a51dc59114.slice - libcontainer container kubepods-besteffort-pod151ac44f_4692_405d_a3ad_26a51dc59114.slice. Jan 17 12:23:05.439426 containerd[1460]: time="2025-01-17T12:23:05.439358158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzkcx,Uid:151ac44f-4692-405d-a3ad-26a51dc59114,Namespace:calico-system,Attempt:0,}" Jan 17 12:23:05.440489 containerd[1460]: time="2025-01-17T12:23:05.439772997Z" level=error msg="Failed to destroy network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.443703 containerd[1460]: time="2025-01-17T12:23:05.443640532Z" level=error msg="encountered an error cleaning up failed sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.443867 containerd[1460]: time="2025-01-17T12:23:05.443725502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c6p9z,Uid:80e6a65e-0c98-4ec1-b14d-0f74c5d02c17,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.446103 containerd[1460]: time="2025-01-17T12:23:05.446042252Z" level=error msg="Failed to destroy network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.446998 containerd[1460]: time="2025-01-17T12:23:05.446825063Z" level=error msg="encountered an error cleaning up failed sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.446998 containerd[1460]: time="2025-01-17T12:23:05.446918046Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-644c6b96bd-qlwqs,Uid:4a25078f-72c0-4f3c-95ba-d53d9ddcf023,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.447595 containerd[1460]: time="2025-01-17T12:23:05.447460069Z" level=error msg="Failed to destroy network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.448237 containerd[1460]: time="2025-01-17T12:23:05.448040100Z" level=error msg="encountered an error cleaning up failed sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.448237 containerd[1460]: time="2025-01-17T12:23:05.448116566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rsr9z,Uid:cf0567d8-141c-4c94-af72-85752733c14f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.449584 containerd[1460]: time="2025-01-17T12:23:05.449443411Z" level=error msg="Failed to destroy network for sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.450137 containerd[1460]: time="2025-01-17T12:23:05.449894754Z" level=error msg="encountered an error cleaning up failed sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.450137 containerd[1460]: time="2025-01-17T12:23:05.449949476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84bb7b955f-qmkwr,Uid:bb7db20c-9339-4707-9d88-fdbe00b2a260,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.450137 containerd[1460]: time="2025-01-17T12:23:05.450057780Z" level=error msg="Failed to destroy network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.450555 containerd[1460]: time="2025-01-17T12:23:05.450528867Z" level=error msg="encountered an error cleaning up failed sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.450648 containerd[1460]: time="2025-01-17T12:23:05.450629845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644c6b96bd-jvpvw,Uid:d31f2d26-9d64-4545-9a49-9ad99ebce942,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.451125 kubelet[2547]: E0117 12:23:05.450994 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.451125 kubelet[2547]: E0117 12:23:05.451020 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.451125 kubelet[2547]: E0117 12:23:05.451071 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-644c6b96bd-jvpvw" Jan 17 12:23:05.451125 kubelet[2547]: E0117 12:23:05.451094 2547 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-644c6b96bd-jvpvw" Jan 17 12:23:05.451379 kubelet[2547]: E0117 12:23:05.451116 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-c6p9z" Jan 17 12:23:05.451379 kubelet[2547]: E0117 12:23:05.451150 2547 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-c6p9z" Jan 17 12:23:05.451379 kubelet[2547]: E0117 12:23:05.451311 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.451379 kubelet[2547]: E0117 12:23:05.451352 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-644c6b96bd-qlwqs" Jan 17 12:23:05.451492 kubelet[2547]: E0117 12:23:05.451379 2547 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-644c6b96bd-qlwqs" Jan 17 12:23:05.451492 kubelet[2547]: E0117 12:23:05.451431 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.451492 kubelet[2547]: E0117 12:23:05.451464 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rsr9z" Jan 17 12:23:05.451492 kubelet[2547]: E0117 12:23:05.451488 2547 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rsr9z" Jan 17 12:23:05.451601 kubelet[2547]: E0117 12:23:05.451571 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.451633 kubelet[2547]: E0117 12:23:05.451608 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84bb7b955f-qmkwr" Jan 17 12:23:05.451661 kubelet[2547]: E0117 12:23:05.451638 2547 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84bb7b955f-qmkwr" Jan 17 12:23:05.452150 kubelet[2547]: E0117 12:23:05.451771 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-644c6b96bd-jvpvw_calico-apiserver(d31f2d26-9d64-4545-9a49-9ad99ebce942)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-644c6b96bd-jvpvw_calico-apiserver(d31f2d26-9d64-4545-9a49-9ad99ebce942)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-644c6b96bd-jvpvw" podUID="d31f2d26-9d64-4545-9a49-9ad99ebce942" Jan 17 12:23:05.452150 kubelet[2547]: E0117 12:23:05.451826 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-c6p9z_kube-system(80e6a65e-0c98-4ec1-b14d-0f74c5d02c17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-c6p9z_kube-system(80e6a65e-0c98-4ec1-b14d-0f74c5d02c17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-c6p9z" podUID="80e6a65e-0c98-4ec1-b14d-0f74c5d02c17" Jan 17 12:23:05.452347 kubelet[2547]: E0117 12:23:05.451858 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-644c6b96bd-qlwqs_calico-apiserver(4a25078f-72c0-4f3c-95ba-d53d9ddcf023)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-644c6b96bd-qlwqs_calico-apiserver(4a25078f-72c0-4f3c-95ba-d53d9ddcf023)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-644c6b96bd-qlwqs" podUID="4a25078f-72c0-4f3c-95ba-d53d9ddcf023" Jan 17 12:23:05.452347 kubelet[2547]: E0117 12:23:05.451889 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-rsr9z_kube-system(cf0567d8-141c-4c94-af72-85752733c14f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rsr9z_kube-system(cf0567d8-141c-4c94-af72-85752733c14f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rsr9z" podUID="cf0567d8-141c-4c94-af72-85752733c14f" Jan 17 12:23:05.452447 kubelet[2547]: E0117 12:23:05.451918 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84bb7b955f-qmkwr_calico-system(bb7db20c-9339-4707-9d88-fdbe00b2a260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84bb7b955f-qmkwr_calico-system(bb7db20c-9339-4707-9d88-fdbe00b2a260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84bb7b955f-qmkwr" podUID="bb7db20c-9339-4707-9d88-fdbe00b2a260" Jan 17 12:23:05.533320 containerd[1460]: time="2025-01-17T12:23:05.533223815Z" level=error msg="Failed to destroy network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.533750 containerd[1460]: time="2025-01-17T12:23:05.533698503Z" level=error msg="encountered an error cleaning up failed sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.533837 containerd[1460]: time="2025-01-17T12:23:05.533791033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzkcx,Uid:151ac44f-4692-405d-a3ad-26a51dc59114,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.534447 kubelet[2547]: E0117 12:23:05.534067 2547 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.534447 kubelet[2547]: E0117 12:23:05.534131 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzkcx" Jan 17 12:23:05.534447 kubelet[2547]: E0117 12:23:05.534155 2547 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzkcx" Jan 17 12:23:05.535393 kubelet[2547]: E0117 12:23:05.534228 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gzkcx_calico-system(151ac44f-4692-405d-a3ad-26a51dc59114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gzkcx_calico-system(151ac44f-4692-405d-a3ad-26a51dc59114)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gzkcx" podUID="151ac44f-4692-405d-a3ad-26a51dc59114" Jan 17 12:23:05.655030 kubelet[2547]: I0117 12:23:05.655001 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:05.659751 kubelet[2547]: I0117 12:23:05.659255 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:05.661598 containerd[1460]: time="2025-01-17T12:23:05.660854911Z" level=info msg="StopPodSandbox for \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\"" Jan 17 12:23:05.664326 containerd[1460]: time="2025-01-17T12:23:05.663227444Z" level=info msg="StopPodSandbox for \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\"" Jan 17 12:23:05.664521 containerd[1460]: time="2025-01-17T12:23:05.664489661Z" level=info msg="Ensure that sandbox 1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049 in task-service has been cleanup successfully" Jan 17 12:23:05.668708 containerd[1460]: time="2025-01-17T12:23:05.668587152Z" level=info msg="Ensure that sandbox 4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d in task-service has been cleanup successfully" Jan 17 12:23:05.677314 kubelet[2547]: E0117 12:23:05.677286 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:05.700805 containerd[1460]: time="2025-01-17T12:23:05.700745388Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:23:05.705571 kubelet[2547]: I0117 12:23:05.704597 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:05.714536 containerd[1460]: time="2025-01-17T12:23:05.714111969Z" level=info msg="StopPodSandbox for \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\"" Jan 17 12:23:05.714963 containerd[1460]: time="2025-01-17T12:23:05.714906727Z" level=info msg="Ensure that sandbox 41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce in task-service has been cleanup successfully" Jan 17 12:23:05.721040 kubelet[2547]: I0117 12:23:05.721006 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:05.724277 containerd[1460]: time="2025-01-17T12:23:05.723627959Z" level=info msg="StopPodSandbox for \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\"" Jan 17 12:23:05.725753 containerd[1460]: time="2025-01-17T12:23:05.725704746Z" level=info msg="Ensure that sandbox a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f in task-service has been cleanup successfully" Jan 17 12:23:05.727194 kubelet[2547]: I0117 12:23:05.726878 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:05.736828 containerd[1460]: time="2025-01-17T12:23:05.735564446Z" level=info msg="StopPodSandbox for \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\"" Jan 17 12:23:05.736828 containerd[1460]: time="2025-01-17T12:23:05.735761271Z" level=info msg="Ensure that sandbox 38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1 in task-service has been cleanup successfully" Jan 17 12:23:05.742244 kubelet[2547]: I0117 12:23:05.742205 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:05.743511 containerd[1460]: time="2025-01-17T12:23:05.743294717Z" level=info msg="StopPodSandbox for \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\"" Jan 17 12:23:05.743511 containerd[1460]: time="2025-01-17T12:23:05.743490981Z" level=info msg="Ensure that sandbox 77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79 in task-service has been cleanup successfully" Jan 17 12:23:05.840899 containerd[1460]: time="2025-01-17T12:23:05.840614092Z" level=error msg="StopPodSandbox for \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\" failed" error="failed to destroy network for sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.841105 kubelet[2547]: E0117 12:23:05.840949 2547 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:05.841105 kubelet[2547]: E0117 12:23:05.841074 2547 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049"} Jan 17 12:23:05.841269 kubelet[2547]: E0117 12:23:05.841117 2547 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb7db20c-9339-4707-9d88-fdbe00b2a260\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:05.841269 kubelet[2547]: E0117 12:23:05.841156 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb7db20c-9339-4707-9d88-fdbe00b2a260\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84bb7b955f-qmkwr" podUID="bb7db20c-9339-4707-9d88-fdbe00b2a260" Jan 17 12:23:05.846265 containerd[1460]: time="2025-01-17T12:23:05.845769087Z" level=error msg="StopPodSandbox for \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\" failed" error="failed to destroy network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.847204 kubelet[2547]: E0117 12:23:05.846714 2547 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:05.847204 kubelet[2547]: E0117 12:23:05.846773 2547 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d"} Jan 17 12:23:05.847204 kubelet[2547]: E0117 12:23:05.846814 2547 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a25078f-72c0-4f3c-95ba-d53d9ddcf023\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:05.847204 kubelet[2547]: E0117 12:23:05.846845 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a25078f-72c0-4f3c-95ba-d53d9ddcf023\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-644c6b96bd-qlwqs" podUID="4a25078f-72c0-4f3c-95ba-d53d9ddcf023" Jan 17 12:23:05.851438 containerd[1460]: time="2025-01-17T12:23:05.851369188Z" level=error msg="StopPodSandbox for \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\" failed" error="failed to destroy network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.852193 kubelet[2547]: E0117 12:23:05.851934 2547 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:05.852193 kubelet[2547]: E0117 12:23:05.851990 2547 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f"} Jan 17 12:23:05.852193 kubelet[2547]: E0117 12:23:05.852031 2547 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80e6a65e-0c98-4ec1-b14d-0f74c5d02c17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:05.852193 kubelet[2547]: E0117 12:23:05.852062 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80e6a65e-0c98-4ec1-b14d-0f74c5d02c17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-c6p9z" podUID="80e6a65e-0c98-4ec1-b14d-0f74c5d02c17" Jan 17 12:23:05.869774 containerd[1460]: time="2025-01-17T12:23:05.869547692Z" level=error msg="StopPodSandbox for \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\" failed" error="failed to destroy network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.870412 kubelet[2547]: E0117 12:23:05.869954 2547 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:05.870412 kubelet[2547]: E0117 12:23:05.870015 2547 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1"} Jan 17 12:23:05.870412 kubelet[2547]: E0117 12:23:05.870065 2547 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf0567d8-141c-4c94-af72-85752733c14f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:05.870412 kubelet[2547]: E0117 12:23:05.870097 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf0567d8-141c-4c94-af72-85752733c14f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rsr9z" podUID="cf0567d8-141c-4c94-af72-85752733c14f" Jan 17 12:23:05.875753 containerd[1460]: time="2025-01-17T12:23:05.875685763Z" level=error msg="StopPodSandbox for \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\" failed" error="failed to destroy network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.876204 kubelet[2547]: E0117 12:23:05.876028 2547 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:05.876204 kubelet[2547]: E0117 12:23:05.876103 2547 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce"} Jan 17 12:23:05.876204 kubelet[2547]: E0117 12:23:05.876158 2547 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d31f2d26-9d64-4545-9a49-9ad99ebce942\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:05.876430 kubelet[2547]: E0117 12:23:05.876247 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d31f2d26-9d64-4545-9a49-9ad99ebce942\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-644c6b96bd-jvpvw" podUID="d31f2d26-9d64-4545-9a49-9ad99ebce942" Jan 17 12:23:05.876510 containerd[1460]: time="2025-01-17T12:23:05.876303778Z" level=error msg="StopPodSandbox for \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\" failed" error="failed to destroy network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:05.876563 kubelet[2547]: E0117 12:23:05.876535 2547 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:05.876614 kubelet[2547]: E0117 12:23:05.876587 2547 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79"} Jan 17 12:23:05.876660 kubelet[2547]: E0117 12:23:05.876631 2547 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"151ac44f-4692-405d-a3ad-26a51dc59114\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:05.876790 kubelet[2547]: E0117 12:23:05.876664 2547 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"151ac44f-4692-405d-a3ad-26a51dc59114\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gzkcx" podUID="151ac44f-4692-405d-a3ad-26a51dc59114" Jan 17 12:23:08.751253 systemd[1]: Started sshd@7-164.92.109.43:22-139.178.68.195:37536.service - OpenSSH per-connection server daemon (139.178.68.195:37536). 
Jan 17 12:23:08.922603 sshd[3606]: Accepted publickey for core from 139.178.68.195 port 37536 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:08.927731 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:08.945710 systemd-logind[1442]: New session 8 of user core. Jan 17 12:23:08.951675 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:23:09.219374 sshd[3606]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:09.227648 systemd[1]: sshd@7-164.92.109.43:22-139.178.68.195:37536.service: Deactivated successfully. Jan 17 12:23:09.231871 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:23:09.234011 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:23:09.236449 systemd-logind[1442]: Removed session 8. Jan 17 12:23:11.687773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376965250.mount: Deactivated successfully. Jan 17 12:23:11.801513 containerd[1460]: time="2025-01-17T12:23:11.782535227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:23:11.801513 containerd[1460]: time="2025-01-17T12:23:11.801384085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:11.803066 containerd[1460]: time="2025-01-17T12:23:11.797611630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.095197737s" Jan 17 12:23:11.803066 containerd[1460]: time="2025-01-17T12:23:11.802900640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:23:11.844928 containerd[1460]: time="2025-01-17T12:23:11.844863039Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:11.846000 containerd[1460]: time="2025-01-17T12:23:11.845910603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:11.856753 containerd[1460]: time="2025-01-17T12:23:11.856686027Z" level=info msg="CreateContainer within sandbox \"b6aff15774fd558fc0caa68a9d34046edf12f839a8c60d489c540278339c10e2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:23:11.911230 containerd[1460]: time="2025-01-17T12:23:11.910252152Z" level=info msg="CreateContainer within sandbox \"b6aff15774fd558fc0caa68a9d34046edf12f839a8c60d489c540278339c10e2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"018bd8dd6b18570a4f05c0c98f3920ad2fc525270fd7e4d7cfdcf84b26864576\"" Jan 17 12:23:11.912732 containerd[1460]: time="2025-01-17T12:23:11.912685757Z" level=info msg="StartContainer for \"018bd8dd6b18570a4f05c0c98f3920ad2fc525270fd7e4d7cfdcf84b26864576\"" Jan 17 12:23:12.174550 systemd[1]: Started cri-containerd-018bd8dd6b18570a4f05c0c98f3920ad2fc525270fd7e4d7cfdcf84b26864576.scope - libcontainer 
container 018bd8dd6b18570a4f05c0c98f3920ad2fc525270fd7e4d7cfdcf84b26864576. Jan 17 12:23:12.220661 containerd[1460]: time="2025-01-17T12:23:12.220582204Z" level=info msg="StartContainer for \"018bd8dd6b18570a4f05c0c98f3920ad2fc525270fd7e4d7cfdcf84b26864576\" returns successfully" Jan 17 12:23:12.318383 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:23:12.319001 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Jan 17 12:23:12.793600 kubelet[2547]: E0117 12:23:12.793503 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:12.829654 kubelet[2547]: I0117 12:23:12.829606 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-m2jmt" podStartSLOduration=2.076974465 podStartE2EDuration="17.822568345s" podCreationTimestamp="2025-01-17 12:22:55 +0000 UTC" firstStartedPulling="2025-01-17 12:22:56.057868907 +0000 UTC m=+28.834133819" lastFinishedPulling="2025-01-17 12:23:11.803462804 +0000 UTC m=+44.579727699" observedRunningTime="2025-01-17 12:23:12.819999038 +0000 UTC m=+45.596263990" watchObservedRunningTime="2025-01-17 12:23:12.822568345 +0000 UTC m=+45.598833263" Jan 17 12:23:14.252691 systemd[1]: Started sshd@8-164.92.109.43:22-139.178.68.195:37544.service - OpenSSH per-connection server daemon (139.178.68.195:37544). Jan 17 12:23:14.406186 sshd[3786]: Accepted publickey for core from 139.178.68.195 port 37544 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:14.411236 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:14.420911 systemd-logind[1442]: New session 9 of user core. Jan 17 12:23:14.425474 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:23:14.478245 kernel: bpftool[3818]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:23:14.675015 sshd[3786]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:14.679900 systemd[1]: sshd@8-164.92.109.43:22-139.178.68.195:37544.service: Deactivated successfully. Jan 17 12:23:14.683041 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:23:14.684073 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:23:14.686466 systemd-logind[1442]: Removed session 9. Jan 17 12:23:14.897462 systemd-networkd[1358]: vxlan.calico: Link UP Jan 17 12:23:14.897475 systemd-networkd[1358]: vxlan.calico: Gained carrier Jan 17 12:23:16.916419 systemd-networkd[1358]: vxlan.calico: Gained IPv6LL Jan 17 12:23:17.413668 containerd[1460]: time="2025-01-17T12:23:17.413603631Z" level=info msg="StopPodSandbox for \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\"" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.502 [INFO][3914] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.503 [INFO][3914] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" iface="eth0" netns="/var/run/netns/cni-4358cca9-d10b-cc53-4af2-f91f6735cd6e" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.505 [INFO][3914] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth.
ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" iface="eth0" netns="/var/run/netns/cni-4358cca9-d10b-cc53-4af2-f91f6735cd6e" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.506 [INFO][3914] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" iface="eth0" netns="/var/run/netns/cni-4358cca9-d10b-cc53-4af2-f91f6735cd6e" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.506 [INFO][3914] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.506 [INFO][3914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.626 [INFO][3920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" HandleID="k8s-pod-network.38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.627 [INFO][3920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.628 [INFO][3920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.640 [WARNING][3920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" HandleID="k8s-pod-network.38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.640 [INFO][3920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" HandleID="k8s-pod-network.38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.644 [INFO][3920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:17.649872 containerd[1460]: 2025-01-17 12:23:17.647 [INFO][3914] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:17.652802 containerd[1460]: time="2025-01-17T12:23:17.650099484Z" level=info msg="TearDown network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\" successfully" Jan 17 12:23:17.652802 containerd[1460]: time="2025-01-17T12:23:17.650262419Z" level=info msg="StopPodSandbox for \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\" returns successfully" Jan 17 12:23:17.654730 systemd[1]: run-netns-cni\x2d4358cca9\x2dd10b\x2dcc53\x2d4af2\x2df91f6735cd6e.mount: Deactivated successfully. 
Jan 17 12:23:17.695255 kubelet[2547]: E0117 12:23:17.694725 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:17.717151 containerd[1460]: time="2025-01-17T12:23:17.717104645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rsr9z,Uid:cf0567d8-141c-4c94-af72-85752733c14f,Namespace:kube-system,Attempt:1,}" Jan 17 12:23:17.929155 systemd-networkd[1358]: cali79f6d4bfbf7: Link UP Jan 17 12:23:17.930878 systemd-networkd[1358]: cali79f6d4bfbf7: Gained carrier Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.819 [INFO][3928] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0 coredns-76f75df574- kube-system cf0567d8-141c-4c94-af72-85752733c14f 873 0 2025-01-17 12:22:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-6-c2def92c28 coredns-76f75df574-rsr9z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79f6d4bfbf7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Namespace="kube-system" Pod="coredns-76f75df574-rsr9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.819 [INFO][3928] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Namespace="kube-system" Pod="coredns-76f75df574-rsr9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.860 [INFO][3938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" HandleID="k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.870 [INFO][3938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" HandleID="k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318040), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-6-c2def92c28", "pod":"coredns-76f75df574-rsr9z", "timestamp":"2025-01-17 12:23:17.860735963 +0000 UTC"}, Hostname:"ci-4081.3.0-6-c2def92c28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.871 [INFO][3938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.871 [INFO][3938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.871 [INFO][3938] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-6-c2def92c28' Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.874 [INFO][3938] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.883 [INFO][3938] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.889 [INFO][3938] ipam/ipam.go 489: Trying affinity for 192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.892 [INFO][3938] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.895 [INFO][3938] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.895 [INFO][3938] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.897 [INFO][3938] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.903 [INFO][3938] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.918 [INFO][3938] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.193/26] block=192.168.120.192/26 handle="k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.918 [INFO][3938] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.193/26] handle="k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.918 [INFO][3938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
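The block-affinity dance just logged — look up the host's affinities, try 192.168.120.192/26, load the block, claim one IP under the host-wide IPAM lock — is Calico's IPAM AutoAssign. A hedged sketch of the equivalent libcalico-go call, with the arguments copied from the assignArgs dump above; the import paths and the exact return type have changed across Calico releases, so treat this as a sketch rather than the plugin's own code path:

```go
package main

import (
	"context"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Datastore configuration is read from the environment; wiring elided.
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// HandleID, hostname, and attrs as dumped in the AutoAssignArgs above.
	handle := "k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f"
	v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
		Num4:     1,
		HandleID: &handle,
		Hostname: "ci-4081.3.0-6-c2def92c28",
		Attrs: map[string]string{
			"namespace": "kube-system",
			"pod":       "coredns-76f75df574-rsr9z",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// For this node the claim comes out of the affine block, here
	// 192.168.120.193/26 as logged above.
	log.Printf("assigned: %+v", v4)
}
```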
Jan 17 12:23:17.963216 containerd[1460]: 2025-01-17 12:23:17.918 [INFO][3938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.193/26] IPv6=[] ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" HandleID="k8s-pod-network.d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:17.963880 containerd[1460]: 2025-01-17 12:23:17.924 [INFO][3928] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Namespace="kube-system" Pod="coredns-76f75df574-rsr9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cf0567d8-141c-4c94-af72-85752733c14f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"", Pod:"coredns-76f75df574-rsr9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f6d4bfbf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:17.963880 containerd[1460]: 2025-01-17 12:23:17.924 [INFO][3928] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.193/32] ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Namespace="kube-system" Pod="coredns-76f75df574-rsr9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:17.963880 containerd[1460]: 2025-01-17 12:23:17.924 [INFO][3928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79f6d4bfbf7 ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Namespace="kube-system" Pod="coredns-76f75df574-rsr9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:17.963880 containerd[1460]: 2025-01-17 12:23:17.933 [INFO][3928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Namespace="kube-system" Pod="coredns-76f75df574-rsr9z" 
WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:17.963880 containerd[1460]: 2025-01-17 12:23:17.934 [INFO][3928] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Namespace="kube-system" Pod="coredns-76f75df574-rsr9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cf0567d8-141c-4c94-af72-85752733c14f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f", Pod:"coredns-76f75df574-rsr9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f6d4bfbf7", MAC:"9a:95:9f:03:72:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:17.963880 containerd[1460]: 2025-01-17 12:23:17.956 [INFO][3928] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f" Namespace="kube-system" Pod="coredns-76f75df574-rsr9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:18.039824 containerd[1460]: time="2025-01-17T12:23:18.039396321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:18.040354 containerd[1460]: time="2025-01-17T12:23:18.040217717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:18.040354 containerd[1460]: time="2025-01-17T12:23:18.040247533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:18.040733 containerd[1460]: time="2025-01-17T12:23:18.040670778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:18.107467 systemd[1]: Started cri-containerd-d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f.scope - libcontainer container d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f. Jan 17 12:23:18.168133 containerd[1460]: time="2025-01-17T12:23:18.168079180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rsr9z,Uid:cf0567d8-141c-4c94-af72-85752733c14f,Namespace:kube-system,Attempt:1,} returns sandbox id \"d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f\"" Jan 17 12:23:18.169871 kubelet[2547]: E0117 12:23:18.169605 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:18.177993 containerd[1460]: time="2025-01-17T12:23:18.177689048Z" level=info msg="CreateContainer within sandbox \"d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:23:18.190981 containerd[1460]: time="2025-01-17T12:23:18.190737047Z" level=info msg="CreateContainer within sandbox \"d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45d66b61900270715b43f52fce0c3ddaad58ed26b0ca2804d79005b403bffdf1\"" Jan 17 12:23:18.191727 containerd[1460]: time="2025-01-17T12:23:18.191696769Z" level=info msg="StartContainer for \"45d66b61900270715b43f52fce0c3ddaad58ed26b0ca2804d79005b403bffdf1\"" Jan 17 12:23:18.229446 systemd[1]: Started cri-containerd-45d66b61900270715b43f52fce0c3ddaad58ed26b0ca2804d79005b403bffdf1.scope - libcontainer container 45d66b61900270715b43f52fce0c3ddaad58ed26b0ca2804d79005b403bffdf1. Jan 17 12:23:18.302455 containerd[1460]: time="2025-01-17T12:23:18.302401226Z" level=info msg="StartContainer for \"45d66b61900270715b43f52fce0c3ddaad58ed26b0ca2804d79005b403bffdf1\" returns successfully" Jan 17 12:23:18.402197 containerd[1460]: time="2025-01-17T12:23:18.401514883Z" level=info msg="StopPodSandbox for \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\"" Jan 17 12:23:18.402197 containerd[1460]: time="2025-01-17T12:23:18.401536487Z" level=info msg="StopPodSandbox for \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\"" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.507 [INFO][4061] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.507 [INFO][4061] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" iface="eth0" netns="/var/run/netns/cni-92c0a77b-ea41-aaab-6abe-7cbfca12f1ec" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.508 [INFO][4061] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" iface="eth0" netns="/var/run/netns/cni-92c0a77b-ea41-aaab-6abe-7cbfca12f1ec" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.508 [INFO][4061] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" iface="eth0" netns="/var/run/netns/cni-92c0a77b-ea41-aaab-6abe-7cbfca12f1ec" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.508 [INFO][4061] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.509 [INFO][4061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.572 [INFO][4076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" HandleID="k8s-pod-network.77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.573 [INFO][4076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.573 [INFO][4076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.579 [WARNING][4076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" HandleID="k8s-pod-network.77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.579 [INFO][4076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" HandleID="k8s-pod-network.77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.582 [INFO][4076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:18.586462 containerd[1460]: 2025-01-17 12:23:18.584 [INFO][4061] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:18.588464 containerd[1460]: time="2025-01-17T12:23:18.587468201Z" level=info msg="TearDown network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\" successfully" Jan 17 12:23:18.588464 containerd[1460]: time="2025-01-17T12:23:18.587858081Z" level=info msg="StopPodSandbox for \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\" returns successfully" Jan 17 12:23:18.594193 containerd[1460]: time="2025-01-17T12:23:18.594125183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzkcx,Uid:151ac44f-4692-405d-a3ad-26a51dc59114,Namespace:calico-system,Attempt:1,}" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.540 [INFO][4065] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.541 [INFO][4065] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" iface="eth0" netns="/var/run/netns/cni-0d60ab6a-f905-df52-adba-c1b83111d3d2" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.542 [INFO][4065] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" iface="eth0" netns="/var/run/netns/cni-0d60ab6a-f905-df52-adba-c1b83111d3d2" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.542 [INFO][4065] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" iface="eth0" netns="/var/run/netns/cni-0d60ab6a-f905-df52-adba-c1b83111d3d2" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.542 [INFO][4065] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.542 [INFO][4065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.590 [INFO][4080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" HandleID="k8s-pod-network.a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.590 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.592 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.600 [WARNING][4080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" HandleID="k8s-pod-network.a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.600 [INFO][4080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" HandleID="k8s-pod-network.a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.603 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:18.608832 containerd[1460]: 2025-01-17 12:23:18.605 [INFO][4065] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:18.610459 containerd[1460]: time="2025-01-17T12:23:18.610292259Z" level=info msg="TearDown network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\" successfully" Jan 17 12:23:18.610459 containerd[1460]: time="2025-01-17T12:23:18.610340308Z" level=info msg="StopPodSandbox for \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\" returns successfully" Jan 17 12:23:18.611218 kubelet[2547]: E0117 12:23:18.610840 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:18.611966 containerd[1460]: time="2025-01-17T12:23:18.611601052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c6p9z,Uid:80e6a65e-0c98-4ec1-b14d-0f74c5d02c17,Namespace:kube-system,Attempt:1,}" Jan 17 12:23:18.661727 systemd[1]: run-netns-cni\x2d92c0a77b\x2dea41\x2daaab\x2d6abe\x2d7cbfca12f1ec.mount: Deactivated successfully. Jan 17 12:23:18.661843 systemd[1]: run-netns-cni\x2d0d60ab6a\x2df905\x2ddf52\x2dadba\x2dc1b83111d3d2.mount: Deactivated successfully. Jan 17 12:23:18.803525 systemd-networkd[1358]: cali31353300c2f: Link UP Jan 17 12:23:18.806901 systemd-networkd[1358]: cali31353300c2f: Gained carrier Jan 17 12:23:18.826553 kubelet[2547]: E0117 12:23:18.825064 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.685 [INFO][4090] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0 csi-node-driver- calico-system 151ac44f-4692-405d-a3ad-26a51dc59114 888 0 2025-01-17 12:22:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-6-c2def92c28 csi-node-driver-gzkcx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali31353300c2f [] []}} ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Namespace="calico-system" Pod="csi-node-driver-gzkcx" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.685 [INFO][4090] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Namespace="calico-system" Pod="csi-node-driver-gzkcx" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.744 [INFO][4114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" HandleID="k8s-pod-network.c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.756 [INFO][4114] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" HandleID="k8s-pod-network.c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003341d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-6-c2def92c28", "pod":"csi-node-driver-gzkcx", "timestamp":"2025-01-17 12:23:18.74428771 +0000 UTC"}, Hostname:"ci-4081.3.0-6-c2def92c28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.756 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.756 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.756 [INFO][4114] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-6-c2def92c28' Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.759 [INFO][4114] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.764 [INFO][4114] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.771 [INFO][4114] ipam/ipam.go 489: Trying affinity for 192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.773 [INFO][4114] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.776 [INFO][4114] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.776 [INFO][4114] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.778 [INFO][4114] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1 Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.783 [INFO][4114] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.793 [INFO][4114] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.194/26] block=192.168.120.192/26 handle="k8s-pod-network.c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.793 [INFO][4114] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.194/26] handle="k8s-pod-network.c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.795 [INFO][4114] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:18.842050 containerd[1460]: 2025-01-17 12:23:18.795 [INFO][4114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.194/26] IPv6=[] ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" HandleID="k8s-pod-network.c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.842765 containerd[1460]: 2025-01-17 12:23:18.798 [INFO][4090] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Namespace="calico-system" Pod="csi-node-driver-gzkcx" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"151ac44f-4692-405d-a3ad-26a51dc59114", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"", Pod:"csi-node-driver-gzkcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31353300c2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:18.842765 containerd[1460]: 2025-01-17 12:23:18.799 [INFO][4090] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.194/32] ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Namespace="calico-system" Pod="csi-node-driver-gzkcx" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.842765 containerd[1460]: 2025-01-17 12:23:18.799 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31353300c2f ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Namespace="calico-system" Pod="csi-node-driver-gzkcx" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.842765 containerd[1460]: 2025-01-17 12:23:18.809 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Namespace="calico-system" Pod="csi-node-driver-gzkcx" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.842765 containerd[1460]: 2025-01-17 12:23:18.811 [INFO][4090] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Namespace="calico-system" Pod="csi-node-driver-gzkcx" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"151ac44f-4692-405d-a3ad-26a51dc59114", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1", Pod:"csi-node-driver-gzkcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31353300c2f", MAC:"7e:d7:62:40:74:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:18.842765 containerd[1460]: 2025-01-17 12:23:18.835 [INFO][4090] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1" Namespace="calico-system" Pod="csi-node-driver-gzkcx" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:18.893133 containerd[1460]: time="2025-01-17T12:23:18.892941070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:18.893695 containerd[1460]: time="2025-01-17T12:23:18.893085131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:18.893871 containerd[1460]: time="2025-01-17T12:23:18.893814776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:18.895324 containerd[1460]: time="2025-01-17T12:23:18.895148067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:18.909611 kubelet[2547]: I0117 12:23:18.908952 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rsr9z" podStartSLOduration=38.908550103 podStartE2EDuration="38.908550103s" podCreationTimestamp="2025-01-17 12:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:18.869660367 +0000 UTC m=+51.645925285" watchObservedRunningTime="2025-01-17 12:23:18.908550103 +0000 UTC m=+51.684814998" Jan 17 12:23:18.929618 systemd[1]: Started cri-containerd-c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1.scope - libcontainer container c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1. Jan 17 12:23:18.935350 systemd-networkd[1358]: cali59107cb3610: Link UP Jan 17 12:23:18.940566 systemd-networkd[1358]: cali59107cb3610: Gained carrier Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.698 [INFO][4091] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0 coredns-76f75df574- kube-system 80e6a65e-0c98-4ec1-b14d-0f74c5d02c17 889 0 2025-01-17 12:22:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-6-c2def92c28 coredns-76f75df574-c6p9z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali59107cb3610 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Namespace="kube-system" Pod="coredns-76f75df574-c6p9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.698 [INFO][4091] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Namespace="kube-system" Pod="coredns-76f75df574-c6p9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.738 [INFO][4119] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" HandleID="k8s-pod-network.79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.758 [INFO][4119] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" HandleID="k8s-pod-network.79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-6-c2def92c28", "pod":"coredns-76f75df574-c6p9z", "timestamp":"2025-01-17 12:23:18.738446076 +0000 UTC"}, Hostname:"ci-4081.3.0-6-c2def92c28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 
12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.758 [INFO][4119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.794 [INFO][4119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.794 [INFO][4119] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-6-c2def92c28' Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.799 [INFO][4119] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.818 [INFO][4119] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.830 [INFO][4119] ipam/ipam.go 489: Trying affinity for 192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.841 [INFO][4119] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.851 [INFO][4119] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.851 [INFO][4119] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.859 [INFO][4119] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1 Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.874 [INFO][4119] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.921 [INFO][4119] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.195/26] block=192.168.120.192/26 handle="k8s-pod-network.79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.922 [INFO][4119] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.195/26] handle="k8s-pod-network.79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.922 [INFO][4119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
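The kubelet dns.go:153 events recurring through this window mean the resolv.conf the pod inherits carries more nameserver entries than the resolver honors, so kubelet applies only the first three — 67.207.67.3 67.207.67.2 67.207.67.3, duplicate included — and warns about the rest. A small checker in the same spirit, a sketch assuming the standard /etc/resolv.conf location:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	const maxNS = 3 // resolver limit; kubelet applies the first three and warns
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	if len(servers) > maxNS {
		fmt.Printf("nameserver limits exceeded; applied nameserver line is: %s\n",
			strings.Join(servers[:maxNS], " "))
	}
}
```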
Jan 17 12:23:18.995018 containerd[1460]: 2025-01-17 12:23:18.922 [INFO][4119] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.195/26] IPv6=[] ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" HandleID="k8s-pod-network.79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:18.995818 containerd[1460]: 2025-01-17 12:23:18.928 [INFO][4091] cni-plugin/k8s.go 386: Populated endpoint ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Namespace="kube-system" Pod="coredns-76f75df574-c6p9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"80e6a65e-0c98-4ec1-b14d-0f74c5d02c17", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"", Pod:"coredns-76f75df574-c6p9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59107cb3610", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:18.995818 containerd[1460]: 2025-01-17 12:23:18.928 [INFO][4091] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.195/32] ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Namespace="kube-system" Pod="coredns-76f75df574-c6p9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:18.995818 containerd[1460]: 2025-01-17 12:23:18.928 [INFO][4091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59107cb3610 ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Namespace="kube-system" Pod="coredns-76f75df574-c6p9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:18.995818 containerd[1460]: 2025-01-17 12:23:18.943 [INFO][4091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Namespace="kube-system" Pod="coredns-76f75df574-c6p9z" 
WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:18.995818 containerd[1460]: 2025-01-17 12:23:18.948 [INFO][4091] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Namespace="kube-system" Pod="coredns-76f75df574-c6p9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"80e6a65e-0c98-4ec1-b14d-0f74c5d02c17", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1", Pod:"coredns-76f75df574-c6p9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59107cb3610", MAC:"42:a6:36:66:57:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:18.995818 containerd[1460]: 2025-01-17 12:23:18.985 [INFO][4091] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1" Namespace="kube-system" Pod="coredns-76f75df574-c6p9z" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:19.047918 containerd[1460]: time="2025-01-17T12:23:19.047878815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzkcx,Uid:151ac44f-4692-405d-a3ad-26a51dc59114,Namespace:calico-system,Attempt:1,} returns sandbox id \"c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1\"" Jan 17 12:23:19.058406 containerd[1460]: time="2025-01-17T12:23:19.058358756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:23:19.065080 containerd[1460]: time="2025-01-17T12:23:19.063548706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:19.065080 containerd[1460]: time="2025-01-17T12:23:19.063609744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:19.065080 containerd[1460]: time="2025-01-17T12:23:19.063620410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:19.065080 containerd[1460]: time="2025-01-17T12:23:19.063801291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:19.103654 systemd[1]: Started cri-containerd-79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1.scope - libcontainer container 79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1. Jan 17 12:23:19.160552 containerd[1460]: time="2025-01-17T12:23:19.160404808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c6p9z,Uid:80e6a65e-0c98-4ec1-b14d-0f74c5d02c17,Namespace:kube-system,Attempt:1,} returns sandbox id \"79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1\"" Jan 17 12:23:19.162360 kubelet[2547]: E0117 12:23:19.162331 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:19.165946 containerd[1460]: time="2025-01-17T12:23:19.165911638Z" level=info msg="CreateContainer within sandbox \"79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:23:19.190118 containerd[1460]: time="2025-01-17T12:23:19.190065788Z" level=info msg="CreateContainer within sandbox \"79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0bfc9c791102e76902af63c00ed91edccc938067d2256115f006f262283cc589\"" Jan 17 12:23:19.191234 containerd[1460]: time="2025-01-17T12:23:19.191200938Z" level=info msg="StartContainer for \"0bfc9c791102e76902af63c00ed91edccc938067d2256115f006f262283cc589\"" Jan 17 12:23:19.234404 systemd[1]: Started cri-containerd-0bfc9c791102e76902af63c00ed91edccc938067d2256115f006f262283cc589.scope - libcontainer container 0bfc9c791102e76902af63c00ed91edccc938067d2256115f006f262283cc589. Jan 17 12:23:19.268292 containerd[1460]: time="2025-01-17T12:23:19.268085607Z" level=info msg="StartContainer for \"0bfc9c791102e76902af63c00ed91edccc938067d2256115f006f262283cc589\" returns successfully" Jan 17 12:23:19.348352 systemd-networkd[1358]: cali79f6d4bfbf7: Gained IPv6LL Jan 17 12:23:19.403358 containerd[1460]: time="2025-01-17T12:23:19.402927464Z" level=info msg="StopPodSandbox for \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\"" Jan 17 12:23:19.406601 containerd[1460]: time="2025-01-17T12:23:19.405883069Z" level=info msg="StopPodSandbox for \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\"" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.493 [INFO][4308] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.493 [INFO][4308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" iface="eth0" netns="/var/run/netns/cni-94315284-3661-e1eb-466e-877b2e12bcd6" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.494 [INFO][4308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" iface="eth0" netns="/var/run/netns/cni-94315284-3661-e1eb-466e-877b2e12bcd6" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.494 [INFO][4308] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" iface="eth0" netns="/var/run/netns/cni-94315284-3661-e1eb-466e-877b2e12bcd6" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.494 [INFO][4308] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.494 [INFO][4308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.533 [INFO][4320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" HandleID="k8s-pod-network.1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.533 [INFO][4320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.533 [INFO][4320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.547 [WARNING][4320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" HandleID="k8s-pod-network.1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.547 [INFO][4320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" HandleID="k8s-pod-network.1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.550 [INFO][4320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:19.555215 containerd[1460]: 2025-01-17 12:23:19.552 [INFO][4308] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:19.555681 containerd[1460]: time="2025-01-17T12:23:19.555379640Z" level=info msg="TearDown network for sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\" successfully" Jan 17 12:23:19.555681 containerd[1460]: time="2025-01-17T12:23:19.555419745Z" level=info msg="StopPodSandbox for \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\" returns successfully" Jan 17 12:23:19.556952 containerd[1460]: time="2025-01-17T12:23:19.556702279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84bb7b955f-qmkwr,Uid:bb7db20c-9339-4707-9d88-fdbe00b2a260,Namespace:calico-system,Attempt:1,}" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.509 [INFO][4307] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.511 [INFO][4307] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" iface="eth0" netns="/var/run/netns/cni-ca08ceec-78ac-2188-c151-43e661af5a1b" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.512 [INFO][4307] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" iface="eth0" netns="/var/run/netns/cni-ca08ceec-78ac-2188-c151-43e661af5a1b" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.512 [INFO][4307] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" iface="eth0" netns="/var/run/netns/cni-ca08ceec-78ac-2188-c151-43e661af5a1b" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.512 [INFO][4307] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.512 [INFO][4307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.547 [INFO][4324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" HandleID="k8s-pod-network.4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.547 [INFO][4324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.550 [INFO][4324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.559 [WARNING][4324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" HandleID="k8s-pod-network.4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.559 [INFO][4324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" HandleID="k8s-pod-network.4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.563 [INFO][4324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:19.572067 containerd[1460]: 2025-01-17 12:23:19.566 [INFO][4307] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:19.572766 containerd[1460]: time="2025-01-17T12:23:19.572278323Z" level=info msg="TearDown network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\" successfully" Jan 17 12:23:19.572766 containerd[1460]: time="2025-01-17T12:23:19.572305056Z" level=info msg="StopPodSandbox for \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\" returns successfully" Jan 17 12:23:19.573572 containerd[1460]: time="2025-01-17T12:23:19.573066686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644c6b96bd-qlwqs,Uid:4a25078f-72c0-4f3c-95ba-d53d9ddcf023,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:23:19.662697 systemd[1]: run-netns-cni\x2dca08ceec\x2d78ac\x2d2188\x2dc151\x2d43e661af5a1b.mount: Deactivated successfully. Jan 17 12:23:19.662851 systemd[1]: run-netns-cni\x2d94315284\x2d3661\x2de1eb\x2d466e\x2d877b2e12bcd6.mount: Deactivated successfully. Jan 17 12:23:19.696694 systemd[1]: Started sshd@9-164.92.109.43:22-139.178.68.195:43216.service - OpenSSH per-connection server daemon (139.178.68.195:43216). 
Jan 17 12:23:19.789263 systemd-networkd[1358]: cali5f49d3ce4ac: Link UP Jan 17 12:23:19.790557 systemd-networkd[1358]: cali5f49d3ce4ac: Gained carrier Jan 17 12:23:19.809209 sshd[4367]: Accepted publickey for core from 139.178.68.195 port 43216 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:19.816077 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.622 [INFO][4333] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0 calico-kube-controllers-84bb7b955f- calico-system bb7db20c-9339-4707-9d88-fdbe00b2a260 924 0 2025-01-17 12:22:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84bb7b955f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-6-c2def92c28 calico-kube-controllers-84bb7b955f-qmkwr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5f49d3ce4ac [] []}} ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Namespace="calico-system" Pod="calico-kube-controllers-84bb7b955f-qmkwr" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.622 [INFO][4333] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Namespace="calico-system" Pod="calico-kube-controllers-84bb7b955f-qmkwr" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.682 [INFO][4355] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" HandleID="k8s-pod-network.e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.706 [INFO][4355] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" HandleID="k8s-pod-network.e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000220b60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-6-c2def92c28", "pod":"calico-kube-controllers-84bb7b955f-qmkwr", "timestamp":"2025-01-17 12:23:19.681905601 +0000 UTC"}, Hostname:"ci-4081.3.0-6-c2def92c28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.706 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.706 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
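sshd identifies each accepted key by its OpenSSH SHA256 fingerprint (the SHA256:r8mW/... string above). That string is the SHA-256 of the decoded public-key blob, base64-encoded without padding. A sketch that reproduces the format, using a hypothetical ed25519 key since the cluster's real core key does not appear in the log:

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// fingerprint reproduces OpenSSH's SHA256 key fingerprint format:
// SHA-256 over the base64-decoded key blob, re-encoded as unpadded base64.
func fingerprint(authorizedKeyLine string) (string, error) {
	fields := strings.Fields(authorizedKeyLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed authorized_keys line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Hypothetical ed25519 public key, for illustration only.
	line := "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJl3gN4vFIhrLKcnjEfTJpyj6qAH6hkyh35qdLvdBJZ5 core"
	fp, err := fingerprint(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(fp) // e.g. SHA256:...
}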
Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.706 [INFO][4355] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-6-c2def92c28' Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.710 [INFO][4355] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.728 [INFO][4355] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.746 [INFO][4355] ipam/ipam.go 489: Trying affinity for 192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.752 [INFO][4355] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.755 [INFO][4355] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.755 [INFO][4355] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.758 [INFO][4355] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.766 [INFO][4355] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.778 [INFO][4355] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.196/26] block=192.168.120.192/26 handle="k8s-pod-network.e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.778 [INFO][4355] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.196/26] handle="k8s-pod-network.e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.778 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
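The IPAM walk above is Calico's fast path: confirm the host's affinity to block 192.168.120.192/26, load the block, and claim the next free address from it (here 192.168.120.196), so a single aggregated route per node can cover all of its pods. A toy version of assigning from a host-affine block, ignoring the borrowing and new-block-claiming paths real Calico also has:

package main

import (
	"fmt"
	"net/netip"
)

// block is a host-affine CIDR plus an allocation set.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]bool
}

// assign returns the first free address in the block, skipping the
// network address, or false when the block is exhausted.
func (b *block) assign() (netip.Addr, bool) {
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.120.192/26"), used: map[netip.Addr]bool{}}
	// Pretend .193-.195 were claimed by earlier pods on this node.
	for _, s := range []string{"192.168.120.193", "192.168.120.194", "192.168.120.195"} {
		b.used[netip.MustParseAddr(s)] = true
	}
	if a, ok := b.assign(); ok {
		fmt.Println(a) // 192.168.120.196, matching the claim in the log
	}
}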
Jan 17 12:23:19.818279 containerd[1460]: 2025-01-17 12:23:19.778 [INFO][4355] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.196/26] IPv6=[] ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" HandleID="k8s-pod-network.e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.820682 containerd[1460]: 2025-01-17 12:23:19.782 [INFO][4333] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Namespace="calico-system" Pod="calico-kube-controllers-84bb7b955f-qmkwr" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0", GenerateName:"calico-kube-controllers-84bb7b955f-", Namespace:"calico-system", SelfLink:"", UID:"bb7db20c-9339-4707-9d88-fdbe00b2a260", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84bb7b955f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"", Pod:"calico-kube-controllers-84bb7b955f-qmkwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f49d3ce4ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:19.820682 containerd[1460]: 2025-01-17 12:23:19.782 [INFO][4333] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.196/32] ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Namespace="calico-system" Pod="calico-kube-controllers-84bb7b955f-qmkwr" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.820682 containerd[1460]: 2025-01-17 12:23:19.782 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f49d3ce4ac ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Namespace="calico-system" Pod="calico-kube-controllers-84bb7b955f-qmkwr" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.820682 containerd[1460]: 2025-01-17 12:23:19.786 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Namespace="calico-system" Pod="calico-kube-controllers-84bb7b955f-qmkwr" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.820682 
containerd[1460]: 2025-01-17 12:23:19.786 [INFO][4333] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Namespace="calico-system" Pod="calico-kube-controllers-84bb7b955f-qmkwr" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0", GenerateName:"calico-kube-controllers-84bb7b955f-", Namespace:"calico-system", SelfLink:"", UID:"bb7db20c-9339-4707-9d88-fdbe00b2a260", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84bb7b955f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a", Pod:"calico-kube-controllers-84bb7b955f-qmkwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f49d3ce4ac", MAC:"3e:bb:4a:ed:73:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:19.820682 containerd[1460]: 2025-01-17 12:23:19.814 [INFO][4333] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a" Namespace="calico-system" Pod="calico-kube-controllers-84bb7b955f-qmkwr" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:19.834357 systemd-logind[1442]: New session 10 of user core. Jan 17 12:23:19.838511 systemd[1]: Started session-10.scope - Session 10 of User core. 
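Once the endpoint above is fully populated — interface name cali5f49d3ce4ac, MAC 3e:bb:4a:ed:73:2b, IP 192.168.120.196/32 — the plugin hands a CNI result back to containerd. A hand-rolled sketch of that result's JSON shape using local struct types (the real plugin returns the containernetworking/cni result types, and the spec version here is an assumption):

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal stand-ins for the CNI result types: one host interface plus the
// IP assigned to the workload, serialized as the JSON a runtime consumes.
type iface struct {
	Name string `json:"name"`
	Mac  string `json:"mac,omitempty"`
}

type ipConfig struct {
	Address string `json:"address"`
}

type result struct {
	CNIVersion string     `json:"cniVersion"`
	Interfaces []iface    `json:"interfaces"`
	IPs        []ipConfig `json:"ips"`
}

func main() {
	r := result{
		CNIVersion: "0.4.0", // assumed spec version for this cluster
		Interfaces: []iface{{Name: "cali5f49d3ce4ac", Mac: "3e:bb:4a:ed:73:2b"}},
		IPs:        []ipConfig{{Address: "192.168.120.196/32"}},
	}
	out, _ := json.MarshalIndent(r, "", "  ")
	fmt.Println(string(out))
}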
Jan 17 12:23:19.852386 kubelet[2547]: E0117 12:23:19.845214 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:19.864517 kubelet[2547]: E0117 12:23:19.864390 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:19.913661 systemd-networkd[1358]: cali4a5f741ba23: Link UP Jan 17 12:23:19.913983 systemd-networkd[1358]: cali4a5f741ba23: Gained carrier Jan 17 12:23:19.930006 kubelet[2547]: I0117 12:23:19.926959 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-c6p9z" podStartSLOduration=39.926908644 podStartE2EDuration="39.926908644s" podCreationTimestamp="2025-01-17 12:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:19.915309374 +0000 UTC m=+52.691574303" watchObservedRunningTime="2025-01-17 12:23:19.926908644 +0000 UTC m=+52.703173560" Jan 17 12:23:19.966255 containerd[1460]: time="2025-01-17T12:23:19.962637399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:19.966255 containerd[1460]: time="2025-01-17T12:23:19.962722786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:19.966255 containerd[1460]: time="2025-01-17T12:23:19.962738788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:19.966255 containerd[1460]: time="2025-01-17T12:23:19.962879850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.655 [INFO][4344] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0 calico-apiserver-644c6b96bd- calico-apiserver 4a25078f-72c0-4f3c-95ba-d53d9ddcf023 925 0 2025-01-17 12:22:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:644c6b96bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-6-c2def92c28 calico-apiserver-644c6b96bd-qlwqs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a5f741ba23 [] []}} ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-qlwqs" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.659 [INFO][4344] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-qlwqs" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.731 [INFO][4363] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" HandleID="k8s-pod-network.b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.748 [INFO][4363] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" HandleID="k8s-pod-network.b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319a00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-6-c2def92c28", "pod":"calico-apiserver-644c6b96bd-qlwqs", "timestamp":"2025-01-17 12:23:19.731240864 +0000 UTC"}, Hostname:"ci-4081.3.0-6-c2def92c28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.748 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.778 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
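The kubelet dns.go:153 warnings a few entries up fire because glibc's resolver only honors the first three nameserver entries (MAXNS), so the kubelet trims the pod's resolv.conf and logs the nameserver line it actually applied. An approximation of that clamping, not the kubelet's own code:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc resolver MAXNS

// clampNameservers keeps the first MAXNS nameserver lines, mirroring the
// kubelet's "Nameserver limits exceeded" behavior in spirit.
func clampNameservers(resolvConf string) (kept, omitted []string) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return kept, omitted
}

func main() {
	// Nameservers from the log, plus a hypothetical fourth entry to trigger the warning.
	conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 10.0.0.10\n"
	kept, omitted := clampNameservers(conf)
	fmt.Println("applied nameserver line is:", strings.Join(kept, " "))
	if len(omitted) > 0 {
		fmt.Println("some nameservers have been omitted:", omitted)
	}
}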
Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.778 [INFO][4363] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-6-c2def92c28' Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.786 [INFO][4363] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.802 [INFO][4363] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.813 [INFO][4363] ipam/ipam.go 489: Trying affinity for 192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.824 [INFO][4363] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.836 [INFO][4363] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.837 [INFO][4363] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.846 [INFO][4363] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.864 [INFO][4363] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.888 [INFO][4363] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.197/26] block=192.168.120.192/26 handle="k8s-pod-network.b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.888 [INFO][4363] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.197/26] handle="k8s-pod-network.b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.888 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
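Three sandboxes are being networked almost simultaneously in this window, and each assignment brackets its block update with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock" so concurrent CNI ADDs on the node cannot claim the same address. The pattern reduced to its core, with a mutex standing in for the datastore lock:

package main

import (
	"fmt"
	"sync"
)

// allocator hands out sequential offsets from a /26; the mutex plays the
// role of Calico's host-wide IPAM lock so concurrent CNI ADDs can't claim
// the same address twice.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) assign() string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	a.next++
	return fmt.Sprintf("192.168.120.%d/26", 192+a.next)
}

func main() {
	a := &allocator{next: 3} // .196 is the next free offset, as in the log
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // three pods being networked at once
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("claimed", a.assign())
		}()
	}
	wg.Wait() // prints .196, .197, .198 in some order, each exactly once
}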
Jan 17 12:23:20.005224 containerd[1460]: 2025-01-17 12:23:19.888 [INFO][4363] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.197/26] IPv6=[] ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" HandleID="k8s-pod-network.b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:20.006108 containerd[1460]: 2025-01-17 12:23:19.902 [INFO][4344] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-qlwqs" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0", GenerateName:"calico-apiserver-644c6b96bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a25078f-72c0-4f3c-95ba-d53d9ddcf023", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644c6b96bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"", Pod:"calico-apiserver-644c6b96bd-qlwqs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a5f741ba23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:20.006108 containerd[1460]: 2025-01-17 12:23:19.903 [INFO][4344] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.197/32] ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-qlwqs" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:20.006108 containerd[1460]: 2025-01-17 12:23:19.903 [INFO][4344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a5f741ba23 ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-qlwqs" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:20.006108 containerd[1460]: 2025-01-17 12:23:19.913 [INFO][4344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-qlwqs" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:20.006108 containerd[1460]: 2025-01-17 12:23:19.923 [INFO][4344] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-qlwqs" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0", GenerateName:"calico-apiserver-644c6b96bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a25078f-72c0-4f3c-95ba-d53d9ddcf023", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644c6b96bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e", Pod:"calico-apiserver-644c6b96bd-qlwqs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a5f741ba23", MAC:"82:32:3c:19:cc:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:20.006108 containerd[1460]: 2025-01-17 12:23:19.988 [INFO][4344] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-qlwqs" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:20.069193 systemd[1]: run-containerd-runc-k8s.io-e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a-runc.vBNlij.mount: Deactivated successfully. Jan 17 12:23:20.085444 systemd[1]: Started cri-containerd-e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a.scope - libcontainer container e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a. Jan 17 12:23:20.114871 containerd[1460]: time="2025-01-17T12:23:20.113721052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:20.117197 containerd[1460]: time="2025-01-17T12:23:20.114803104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:20.117197 containerd[1460]: time="2025-01-17T12:23:20.116266043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:20.117197 containerd[1460]: time="2025-01-17T12:23:20.116400755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:20.148389 sshd[4367]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:20.151407 systemd[1]: Started cri-containerd-b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e.scope - libcontainer container b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e. Jan 17 12:23:20.160907 systemd[1]: sshd@9-164.92.109.43:22-139.178.68.195:43216.service: Deactivated successfully. Jan 17 12:23:20.163587 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:23:20.168088 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:23:20.179874 systemd[1]: Started sshd@10-164.92.109.43:22-139.178.68.195:43218.service - OpenSSH per-connection server daemon (139.178.68.195:43218). Jan 17 12:23:20.184334 systemd-logind[1442]: Removed session 10. Jan 17 12:23:20.246106 sshd[4478]: Accepted publickey for core from 139.178.68.195 port 43218 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:20.248445 containerd[1460]: time="2025-01-17T12:23:20.248365206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84bb7b955f-qmkwr,Uid:bb7db20c-9339-4707-9d88-fdbe00b2a260,Namespace:calico-system,Attempt:1,} returns sandbox id \"e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a\"" Jan 17 12:23:20.248633 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:20.260229 systemd-logind[1442]: New session 11 of user core. Jan 17 12:23:20.266107 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:23:20.273120 containerd[1460]: time="2025-01-17T12:23:20.273046456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644c6b96bd-qlwqs,Uid:4a25078f-72c0-4f3c-95ba-d53d9ddcf023,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e\"" Jan 17 12:23:20.373429 systemd-networkd[1358]: cali59107cb3610: Gained IPv6LL Jan 17 12:23:20.408613 containerd[1460]: time="2025-01-17T12:23:20.408503838Z" level=info msg="StopPodSandbox for \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\"" Jan 17 12:23:20.618717 sshd[4478]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:20.632977 systemd[1]: sshd@10-164.92.109.43:22-139.178.68.195:43218.service: Deactivated successfully. Jan 17 12:23:20.640077 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:23:20.645530 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:23:20.656632 systemd[1]: Started sshd@11-164.92.109.43:22-139.178.68.195:43234.service - OpenSSH per-connection server daemon (139.178.68.195:43234). Jan 17 12:23:20.666930 systemd-logind[1442]: Removed session 11. Jan 17 12:23:20.764768 sshd[4542]: Accepted publickey for core from 139.178.68.195 port 43234 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:20.768810 sshd[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.592 [INFO][4524] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.594 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" iface="eth0" netns="/var/run/netns/cni-bf0f6c76-b847-0940-4877-916d2669b688" Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.595 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" iface="eth0" netns="/var/run/netns/cni-bf0f6c76-b847-0940-4877-916d2669b688" Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.600 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" iface="eth0" netns="/var/run/netns/cni-bf0f6c76-b847-0940-4877-916d2669b688" Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.601 [INFO][4524] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.603 [INFO][4524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.736 [INFO][4534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" HandleID="k8s-pod-network.41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.736 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.737 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.751 [WARNING][4534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" HandleID="k8s-pod-network.41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.752 [INFO][4534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" HandleID="k8s-pod-network.41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.755 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:20.772600 containerd[1460]: 2025-01-17 12:23:20.762 [INFO][4524] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:20.775435 containerd[1460]: time="2025-01-17T12:23:20.774384859Z" level=info msg="TearDown network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\" successfully" Jan 17 12:23:20.775435 containerd[1460]: time="2025-01-17T12:23:20.774422836Z" level=info msg="StopPodSandbox for \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\" returns successfully" Jan 17 12:23:20.776300 systemd[1]: run-netns-cni\x2dbf0f6c76\x2db847\x2d0940\x2d4877\x2d916d2669b688.mount: Deactivated successfully. Jan 17 12:23:20.782041 containerd[1460]: time="2025-01-17T12:23:20.781304062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644c6b96bd-jvpvw,Uid:d31f2d26-9d64-4545-9a49-9ad99ebce942,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:23:20.785254 systemd-logind[1442]: New session 12 of user core. Jan 17 12:23:20.790469 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:23:20.821684 systemd-networkd[1358]: cali31353300c2f: Gained IPv6LL Jan 17 12:23:20.892756 kubelet[2547]: E0117 12:23:20.892144 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:20.896639 kubelet[2547]: E0117 12:23:20.895657 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:20.966859 containerd[1460]: time="2025-01-17T12:23:20.966371601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:20.972366 containerd[1460]: time="2025-01-17T12:23:20.971360859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:23:20.972366 containerd[1460]: time="2025-01-17T12:23:20.972095090Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:20.977112 containerd[1460]: time="2025-01-17T12:23:20.977053498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:20.980670 containerd[1460]: time="2025-01-17T12:23:20.979786078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.921128653s" Jan 17 12:23:20.980670 containerd[1460]: time="2025-01-17T12:23:20.979848970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:23:20.983421 containerd[1460]: time="2025-01-17T12:23:20.982293226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:23:20.985755 containerd[1460]: time="2025-01-17T12:23:20.984661224Z" level=info msg="CreateContainer within sandbox 
\"c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:23:21.013860 containerd[1460]: time="2025-01-17T12:23:21.013690867Z" level=info msg="CreateContainer within sandbox \"c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cc2b07c9fa649817d65dab78f2c1eeb8aae3a711a5ae953f0fe6d4f056230ab6\"" Jan 17 12:23:21.018325 containerd[1460]: time="2025-01-17T12:23:21.016375621Z" level=info msg="StartContainer for \"cc2b07c9fa649817d65dab78f2c1eeb8aae3a711a5ae953f0fe6d4f056230ab6\"" Jan 17 12:23:21.056908 sshd[4542]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:21.067997 systemd[1]: sshd@11-164.92.109.43:22-139.178.68.195:43234.service: Deactivated successfully. Jan 17 12:23:21.071782 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:23:21.076856 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:23:21.081060 systemd-logind[1442]: Removed session 12. Jan 17 12:23:21.092156 systemd-networkd[1358]: calia3b625ad029: Link UP Jan 17 12:23:21.093118 systemd-networkd[1358]: calia3b625ad029: Gained carrier Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:20.912 [INFO][4548] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0 calico-apiserver-644c6b96bd- calico-apiserver d31f2d26-9d64-4545-9a49-9ad99ebce942 950 0 2025-01-17 12:22:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:644c6b96bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-6-c2def92c28 calico-apiserver-644c6b96bd-jvpvw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3b625ad029 [] []}} ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-jvpvw" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:20.914 [INFO][4548] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-jvpvw" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:20.992 [INFO][4568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" HandleID="k8s-pod-network.cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.010 [INFO][4568] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" HandleID="k8s-pod-network.cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311970), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-6-c2def92c28", "pod":"calico-apiserver-644c6b96bd-jvpvw", "timestamp":"2025-01-17 12:23:20.992464614 +0000 UTC"}, Hostname:"ci-4081.3.0-6-c2def92c28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.010 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.010 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.010 [INFO][4568] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-6-c2def92c28' Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.015 [INFO][4568] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.025 [INFO][4568] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.039 [INFO][4568] ipam/ipam.go 489: Trying affinity for 192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.044 [INFO][4568] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.048 [INFO][4568] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.048 [INFO][4568] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.055 [INFO][4568] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615 Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.068 [INFO][4568] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.085 [INFO][4568] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.198/26] block=192.168.120.192/26 handle="k8s-pod-network.cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.085 [INFO][4568] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.198/26] handle="k8s-pod-network.cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" host="ci-4081.3.0-6-c2def92c28" Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.085 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:23:21.123778 containerd[1460]: 2025-01-17 12:23:21.086 [INFO][4568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.198/26] IPv6=[] ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" HandleID="k8s-pod-network.cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:21.125608 containerd[1460]: 2025-01-17 12:23:21.088 [INFO][4548] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-jvpvw" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0", GenerateName:"calico-apiserver-644c6b96bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d31f2d26-9d64-4545-9a49-9ad99ebce942", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644c6b96bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"", Pod:"calico-apiserver-644c6b96bd-jvpvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3b625ad029", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:21.125608 containerd[1460]: 2025-01-17 12:23:21.088 [INFO][4548] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.198/32] ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-jvpvw" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:21.125608 containerd[1460]: 2025-01-17 12:23:21.088 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3b625ad029 ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-jvpvw" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:21.125608 containerd[1460]: 2025-01-17 12:23:21.094 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-jvpvw" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:21.125608 containerd[1460]: 2025-01-17 12:23:21.095 [INFO][4548] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-jvpvw" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0", GenerateName:"calico-apiserver-644c6b96bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d31f2d26-9d64-4545-9a49-9ad99ebce942", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644c6b96bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615", Pod:"calico-apiserver-644c6b96bd-jvpvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3b625ad029", MAC:"6a:a3:6a:2f:20:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:21.125608 containerd[1460]: 2025-01-17 12:23:21.110 [INFO][4548] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615" Namespace="calico-apiserver" Pod="calico-apiserver-644c6b96bd-jvpvw" WorkloadEndpoint="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:21.134463 systemd[1]: Started cri-containerd-cc2b07c9fa649817d65dab78f2c1eeb8aae3a711a5ae953f0fe6d4f056230ab6.scope - libcontainer container cc2b07c9fa649817d65dab78f2c1eeb8aae3a711a5ae953f0fe6d4f056230ab6. Jan 17 12:23:21.186242 containerd[1460]: time="2025-01-17T12:23:21.185038799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:21.186242 containerd[1460]: time="2025-01-17T12:23:21.185107101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:21.186242 containerd[1460]: time="2025-01-17T12:23:21.185118165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:21.186242 containerd[1460]: time="2025-01-17T12:23:21.185225268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:21.190013 containerd[1460]: time="2025-01-17T12:23:21.189719397Z" level=info msg="StartContainer for \"cc2b07c9fa649817d65dab78f2c1eeb8aae3a711a5ae953f0fe6d4f056230ab6\" returns successfully" Jan 17 12:23:21.212421 systemd[1]: Started cri-containerd-cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615.scope - libcontainer container cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615. Jan 17 12:23:21.264414 containerd[1460]: time="2025-01-17T12:23:21.264352945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644c6b96bd-jvpvw,Uid:d31f2d26-9d64-4545-9a49-9ad99ebce942,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615\"" Jan 17 12:23:21.396625 systemd-networkd[1358]: cali4a5f741ba23: Gained IPv6LL Jan 17 12:23:21.663783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015384329.mount: Deactivated successfully. Jan 17 12:23:21.716635 systemd-networkd[1358]: cali5f49d3ce4ac: Gained IPv6LL Jan 17 12:23:21.894904 kubelet[2547]: E0117 12:23:21.894846 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:22.357147 systemd-networkd[1358]: calia3b625ad029: Gained IPv6LL Jan 17 12:23:22.994900 containerd[1460]: time="2025-01-17T12:23:22.994848888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:22.995928 containerd[1460]: time="2025-01-17T12:23:22.995562612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:23:22.996712 containerd[1460]: time="2025-01-17T12:23:22.996414830Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:22.998584 containerd[1460]: time="2025-01-17T12:23:22.998545020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:22.999495 containerd[1460]: time="2025-01-17T12:23:22.999440499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.017108336s" Jan 17 12:23:22.999627 containerd[1460]: time="2025-01-17T12:23:22.999608962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:23:23.000431 containerd[1460]: time="2025-01-17T12:23:23.000373397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:23:23.021873 containerd[1460]: time="2025-01-17T12:23:23.021468861Z" level=info msg="CreateContainer within sandbox \"e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:23:23.048999 containerd[1460]: time="2025-01-17T12:23:23.048853404Z" level=info msg="CreateContainer within sandbox \"e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b40c3f8c62ef171f78b03c5dba2d9e57b9f48db63c6ef8f810cc0fc729f9631e\"" Jan 17 12:23:23.051061 containerd[1460]: time="2025-01-17T12:23:23.050378399Z" level=info msg="StartContainer for \"b40c3f8c62ef171f78b03c5dba2d9e57b9f48db63c6ef8f810cc0fc729f9631e\"" Jan 17 12:23:23.093428 systemd[1]: Started cri-containerd-b40c3f8c62ef171f78b03c5dba2d9e57b9f48db63c6ef8f810cc0fc729f9631e.scope - libcontainer container b40c3f8c62ef171f78b03c5dba2d9e57b9f48db63c6ef8f810cc0fc729f9631e. Jan 17 12:23:23.146191 containerd[1460]: time="2025-01-17T12:23:23.146114409Z" level=info msg="StartContainer for \"b40c3f8c62ef171f78b03c5dba2d9e57b9f48db63c6ef8f810cc0fc729f9631e\" returns successfully" Jan 17 12:23:24.077056 kubelet[2547]: I0117 12:23:24.076996 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84bb7b955f-qmkwr" podStartSLOduration=26.329374409 podStartE2EDuration="29.076940019s" podCreationTimestamp="2025-01-17 12:22:55 +0000 UTC" firstStartedPulling="2025-01-17 12:23:20.252554358 +0000 UTC m=+53.028819255" lastFinishedPulling="2025-01-17 12:23:23.000119939 +0000 UTC m=+55.776384865" observedRunningTime="2025-01-17 12:23:24.055724911 +0000 UTC m=+56.831989829" watchObservedRunningTime="2025-01-17 12:23:24.076940019 +0000 UTC m=+56.853204935" Jan 17 12:23:25.366201 containerd[1460]: time="2025-01-17T12:23:25.364686612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:25.366201 containerd[1460]: time="2025-01-17T12:23:25.365549594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:23:25.367454 containerd[1460]: time="2025-01-17T12:23:25.367084485Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:25.370468 containerd[1460]: time="2025-01-17T12:23:25.370412709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:25.372363 containerd[1460]: time="2025-01-17T12:23:25.372282766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.371876167s" Jan 17 12:23:25.372363 containerd[1460]: time="2025-01-17T12:23:25.372361754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:23:25.373737 containerd[1460]: time="2025-01-17T12:23:25.373632593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:23:25.376919 containerd[1460]: 
time="2025-01-17T12:23:25.376475018Z" level=info msg="CreateContainer within sandbox \"b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:23:25.394665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961460158.mount: Deactivated successfully. Jan 17 12:23:25.404205 containerd[1460]: time="2025-01-17T12:23:25.403776700Z" level=info msg="CreateContainer within sandbox \"b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c526765332ab41ffc3c80565fafa0c924bd41475399eb6df25f356062edced7d\"" Jan 17 12:23:25.407142 containerd[1460]: time="2025-01-17T12:23:25.407082568Z" level=info msg="StartContainer for \"c526765332ab41ffc3c80565fafa0c924bd41475399eb6df25f356062edced7d\"" Jan 17 12:23:25.466497 systemd[1]: Started cri-containerd-c526765332ab41ffc3c80565fafa0c924bd41475399eb6df25f356062edced7d.scope - libcontainer container c526765332ab41ffc3c80565fafa0c924bd41475399eb6df25f356062edced7d. Jan 17 12:23:25.529312 containerd[1460]: time="2025-01-17T12:23:25.529107584Z" level=info msg="StartContainer for \"c526765332ab41ffc3c80565fafa0c924bd41475399eb6df25f356062edced7d\" returns successfully" Jan 17 12:23:25.963202 kubelet[2547]: I0117 12:23:25.962875 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-644c6b96bd-qlwqs" podStartSLOduration=25.86563556 podStartE2EDuration="30.962809954s" podCreationTimestamp="2025-01-17 12:22:55 +0000 UTC" firstStartedPulling="2025-01-17 12:23:20.275694516 +0000 UTC m=+53.051959424" lastFinishedPulling="2025-01-17 12:23:25.372868923 +0000 UTC m=+58.149133818" observedRunningTime="2025-01-17 12:23:25.962317902 +0000 UTC m=+58.738582820" watchObservedRunningTime="2025-01-17 12:23:25.962809954 +0000 UTC m=+58.739074871" Jan 17 12:23:26.080464 systemd[1]: Started sshd@12-164.92.109.43:22-139.178.68.195:55040.service - OpenSSH per-connection server daemon (139.178.68.195:55040). Jan 17 12:23:26.186645 sshd[4788]: Accepted publickey for core from 139.178.68.195 port 55040 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:26.188089 sshd[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:26.195501 systemd-logind[1442]: New session 13 of user core. Jan 17 12:23:26.199373 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:23:26.808562 sshd[4788]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:26.814794 systemd[1]: sshd@12-164.92.109.43:22-139.178.68.195:55040.service: Deactivated successfully. Jan 17 12:23:26.819482 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:23:26.825208 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:23:26.827930 systemd-logind[1442]: Removed session 13. 
Jan 17 12:23:26.940964 kubelet[2547]: I0117 12:23:26.939386 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:27.045101 containerd[1460]: time="2025-01-17T12:23:27.045055339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:27.046642 containerd[1460]: time="2025-01-17T12:23:27.046202709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:23:27.046879 containerd[1460]: time="2025-01-17T12:23:27.046851928Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:27.051049 containerd[1460]: time="2025-01-17T12:23:27.050970504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:27.052343 containerd[1460]: time="2025-01-17T12:23:27.052301679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.678631462s" Jan 17 12:23:27.052343 containerd[1460]: time="2025-01-17T12:23:27.052340961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:23:27.053547 containerd[1460]: time="2025-01-17T12:23:27.053064841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:23:27.056604 containerd[1460]: time="2025-01-17T12:23:27.055896822Z" level=info msg="CreateContainer within sandbox \"c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:23:27.079194 containerd[1460]: time="2025-01-17T12:23:27.079130871Z" level=info msg="CreateContainer within sandbox \"c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"449f5eeb8178920debe68b21bba9a438dbaeec49039d584f6e78841bc3aa3df6\"" Jan 17 12:23:27.080046 containerd[1460]: time="2025-01-17T12:23:27.080020239Z" level=info msg="StartContainer for \"449f5eeb8178920debe68b21bba9a438dbaeec49039d584f6e78841bc3aa3df6\"" Jan 17 12:23:27.160494 systemd[1]: Started cri-containerd-449f5eeb8178920debe68b21bba9a438dbaeec49039d584f6e78841bc3aa3df6.scope - libcontainer container 449f5eeb8178920debe68b21bba9a438dbaeec49039d584f6e78841bc3aa3df6. 
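[Editor's note: a back-of-envelope derivation from the pulls above, using only the logged "bytes read" counts and pull durations; containerd does not log throughput itself. The 77-byte read on the second apiserver pull, paired with the ImageUpdate event below, suggests only a manifest check was needed because the layers were already present.]

package main

import "fmt"

// "bytes read" and pull durations copied from the containerd entries in
// this log. The MB/s figures are derived here, not logged by containerd.
func main() {
	pulls := []struct {
		image   string
		bytes   int64
		seconds float64
	}{
		{"calico/kube-controllers:v3.29.1", 34141192, 2.017108336},
		{"calico/apiserver:v3.29.1 (first pull)", 42001404, 2.371876167},
		{"calico/node-driver-registrar:v3.29.1", 10501081, 1.678631462},
		{"calico/apiserver:v3.29.1 (cached re-pull)", 77, 0.368499975},
	}
	for _, p := range pulls {
		fmt.Printf("%-45s %.1f MB/s\n", p.image, float64(p.bytes)/p.seconds/1e6)
	}
}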
Jan 17 12:23:27.210498 containerd[1460]: time="2025-01-17T12:23:27.210430355Z" level=info msg="StartContainer for \"449f5eeb8178920debe68b21bba9a438dbaeec49039d584f6e78841bc3aa3df6\" returns successfully" Jan 17 12:23:27.413117 containerd[1460]: time="2025-01-17T12:23:27.412309740Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:27.413117 containerd[1460]: time="2025-01-17T12:23:27.412866564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:23:27.421658 containerd[1460]: time="2025-01-17T12:23:27.421602521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 368.499975ms" Jan 17 12:23:27.422013 containerd[1460]: time="2025-01-17T12:23:27.421822945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:23:27.426798 containerd[1460]: time="2025-01-17T12:23:27.426451155Z" level=info msg="CreateContainer within sandbox \"cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:23:27.428742 containerd[1460]: time="2025-01-17T12:23:27.428405132Z" level=info msg="StopPodSandbox for \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\"" Jan 17 12:23:27.436713 containerd[1460]: time="2025-01-17T12:23:27.436405196Z" level=info msg="CreateContainer within sandbox \"cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8f1b8f8b863aa87ccef5c4b11c904eb17b4750bea637a87ab148ef36649c08f1\"" Jan 17 12:23:27.442245 containerd[1460]: time="2025-01-17T12:23:27.439448679Z" level=info msg="StartContainer for \"8f1b8f8b863aa87ccef5c4b11c904eb17b4750bea637a87ab148ef36649c08f1\"" Jan 17 12:23:27.522497 systemd[1]: Started cri-containerd-8f1b8f8b863aa87ccef5c4b11c904eb17b4750bea637a87ab148ef36649c08f1.scope - libcontainer container 8f1b8f8b863aa87ccef5c4b11c904eb17b4750bea637a87ab148ef36649c08f1. Jan 17 12:23:27.659482 containerd[1460]: time="2025-01-17T12:23:27.659095655Z" level=info msg="StartContainer for \"8f1b8f8b863aa87ccef5c4b11c904eb17b4750bea637a87ab148ef36649c08f1\" returns successfully" Jan 17 12:23:27.666274 kubelet[2547]: I0117 12:23:27.666099 2547 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:23:27.675189 kubelet[2547]: I0117 12:23:27.674936 2547 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.638 [WARNING][4864] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"80e6a65e-0c98-4ec1-b14d-0f74c5d02c17", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1", Pod:"coredns-76f75df574-c6p9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59107cb3610", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.641 [INFO][4864] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.641 [INFO][4864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" iface="eth0" netns="" Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.641 [INFO][4864] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.641 [INFO][4864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.754 [INFO][4897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" HandleID="k8s-pod-network.a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.754 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.754 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.768 [WARNING][4897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" HandleID="k8s-pod-network.a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.768 [INFO][4897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" HandleID="k8s-pod-network.a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.773 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:27.782766 containerd[1460]: 2025-01-17 12:23:27.778 [INFO][4864] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:27.784630 containerd[1460]: time="2025-01-17T12:23:27.782783138Z" level=info msg="TearDown network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\" successfully" Jan 17 12:23:27.784630 containerd[1460]: time="2025-01-17T12:23:27.782819257Z" level=info msg="StopPodSandbox for \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\" returns successfully" Jan 17 12:23:27.793536 containerd[1460]: time="2025-01-17T12:23:27.791611582Z" level=info msg="RemovePodSandbox for \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\"" Jan 17 12:23:27.793536 containerd[1460]: time="2025-01-17T12:23:27.791662837Z" level=info msg="Forcibly stopping sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\"" Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.876 [WARNING][4919] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"80e6a65e-0c98-4ec1-b14d-0f74c5d02c17", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"79ff7284a9f782329f1d39ed3dae8f3ca0ace1d89e0e90e8570752d0b82775f1", Pod:"coredns-76f75df574-c6p9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59107cb3610", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.876 [INFO][4919] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.876 [INFO][4919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" iface="eth0" netns="" Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.876 [INFO][4919] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.876 [INFO][4919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.904 [INFO][4928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" HandleID="k8s-pod-network.a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.904 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.904 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.913 [WARNING][4928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" HandleID="k8s-pod-network.a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.913 [INFO][4928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" HandleID="k8s-pod-network.a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--c6p9z-eth0" Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.915 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:27.919673 containerd[1460]: 2025-01-17 12:23:27.917 [INFO][4919] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f" Jan 17 12:23:27.919673 containerd[1460]: time="2025-01-17T12:23:27.919615802Z" level=info msg="TearDown network for sandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\" successfully" Jan 17 12:23:27.929184 containerd[1460]: time="2025-01-17T12:23:27.928951614Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:27.929184 containerd[1460]: time="2025-01-17T12:23:27.929109534Z" level=info msg="RemovePodSandbox \"a2c6f241e367c87b54ba25665c5a8e0203614aafe92e6ff49e041098a168c26f\" returns successfully" Jan 17 12:23:27.930716 containerd[1460]: time="2025-01-17T12:23:27.930363371Z" level=info msg="StopPodSandbox for \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\"" Jan 17 12:23:27.957232 kubelet[2547]: I0117 12:23:27.955783 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-gzkcx" podStartSLOduration=24.958342116 podStartE2EDuration="32.955733255s" podCreationTimestamp="2025-01-17 12:22:55 +0000 UTC" firstStartedPulling="2025-01-17 12:23:19.055383501 +0000 UTC m=+51.831648396" lastFinishedPulling="2025-01-17 12:23:27.052774639 +0000 UTC m=+59.829039535" observedRunningTime="2025-01-17 12:23:27.954291389 +0000 UTC m=+60.730556305" watchObservedRunningTime="2025-01-17 12:23:27.955733255 +0000 UTC m=+60.731998162" Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.019 [WARNING][4945] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cf0567d8-141c-4c94-af72-85752733c14f", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f", Pod:"coredns-76f75df574-rsr9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f6d4bfbf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.020 [INFO][4945] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.020 [INFO][4945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" iface="eth0" netns="" Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.020 [INFO][4945] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.020 [INFO][4945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.050 [INFO][4954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" HandleID="k8s-pod-network.38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.050 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.050 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.056 [WARNING][4954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" HandleID="k8s-pod-network.38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.056 [INFO][4954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" HandleID="k8s-pod-network.38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.058 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:28.063251 containerd[1460]: 2025-01-17 12:23:28.060 [INFO][4945] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:28.063251 containerd[1460]: time="2025-01-17T12:23:28.063232112Z" level=info msg="TearDown network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\" successfully" Jan 17 12:23:28.064325 containerd[1460]: time="2025-01-17T12:23:28.063267080Z" level=info msg="StopPodSandbox for \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\" returns successfully" Jan 17 12:23:28.064865 containerd[1460]: time="2025-01-17T12:23:28.064804909Z" level=info msg="RemovePodSandbox for \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\"" Jan 17 12:23:28.064865 containerd[1460]: time="2025-01-17T12:23:28.064845780Z" level=info msg="Forcibly stopping sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\"" Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.144 [WARNING][4972] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cf0567d8-141c-4c94-af72-85752733c14f", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"d30caa4efca5fb8d4ba558c6b218b381bce55c2d1a05def124c072865fa0426f", Pod:"coredns-76f75df574-rsr9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f6d4bfbf7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.144 [INFO][4972] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.144 [INFO][4972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" iface="eth0" netns="" Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.144 [INFO][4972] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.144 [INFO][4972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.182 [INFO][4978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" HandleID="k8s-pod-network.38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.182 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.182 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.201 [WARNING][4978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" HandleID="k8s-pod-network.38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.201 [INFO][4978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" HandleID="k8s-pod-network.38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Workload="ci--4081.3.0--6--c2def92c28-k8s-coredns--76f75df574--rsr9z-eth0" Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.204 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:28.210309 containerd[1460]: 2025-01-17 12:23:28.206 [INFO][4972] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1" Jan 17 12:23:28.210309 containerd[1460]: time="2025-01-17T12:23:28.210257406Z" level=info msg="TearDown network for sandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\" successfully" Jan 17 12:23:28.215485 containerd[1460]: time="2025-01-17T12:23:28.215421339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:28.215685 containerd[1460]: time="2025-01-17T12:23:28.215507130Z" level=info msg="RemovePodSandbox \"38dd1444836d8fb6a61d1fab283d9a394c9ec49acb52bb8d1cf573e36348a5c1\" returns successfully" Jan 17 12:23:28.218415 containerd[1460]: time="2025-01-17T12:23:28.216273362Z" level=info msg="StopPodSandbox for \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\"" Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.307 [WARNING][4999] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0", GenerateName:"calico-kube-controllers-84bb7b955f-", Namespace:"calico-system", SelfLink:"", UID:"bb7db20c-9339-4707-9d88-fdbe00b2a260", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84bb7b955f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a", Pod:"calico-kube-controllers-84bb7b955f-qmkwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f49d3ce4ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.307 [INFO][4999] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.307 [INFO][4999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" iface="eth0" netns="" Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.307 [INFO][4999] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.307 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.344 [INFO][5005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" HandleID="k8s-pod-network.1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.345 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.345 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.356 [WARNING][5005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" HandleID="k8s-pod-network.1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.356 [INFO][5005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" HandleID="k8s-pod-network.1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.358 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:28.365524 containerd[1460]: 2025-01-17 12:23:28.361 [INFO][4999] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:28.365524 containerd[1460]: time="2025-01-17T12:23:28.365376468Z" level=info msg="TearDown network for sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\" successfully" Jan 17 12:23:28.365524 containerd[1460]: time="2025-01-17T12:23:28.365403301Z" level=info msg="StopPodSandbox for \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\" returns successfully" Jan 17 12:23:28.367881 containerd[1460]: time="2025-01-17T12:23:28.366985439Z" level=info msg="RemovePodSandbox for \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\"" Jan 17 12:23:28.367881 containerd[1460]: time="2025-01-17T12:23:28.367024922Z" level=info msg="Forcibly stopping sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\"" Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.433 [WARNING][5023] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0", GenerateName:"calico-kube-controllers-84bb7b955f-", Namespace:"calico-system", SelfLink:"", UID:"bb7db20c-9339-4707-9d88-fdbe00b2a260", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84bb7b955f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"e2e20b1f8b43ba99eaab1f4c13e695921662b0af756d6115806caa36b027c05a", Pod:"calico-kube-controllers-84bb7b955f-qmkwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f49d3ce4ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.433 [INFO][5023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.434 [INFO][5023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" iface="eth0" netns="" Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.434 [INFO][5023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.434 [INFO][5023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.477 [INFO][5029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" HandleID="k8s-pod-network.1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.477 [INFO][5029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.477 [INFO][5029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.487 [WARNING][5029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" HandleID="k8s-pod-network.1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.487 [INFO][5029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" HandleID="k8s-pod-network.1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--kube--controllers--84bb7b955f--qmkwr-eth0" Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.494 [INFO][5029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:28.498763 containerd[1460]: 2025-01-17 12:23:28.496 [INFO][5023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049" Jan 17 12:23:28.498763 containerd[1460]: time="2025-01-17T12:23:28.498604739Z" level=info msg="TearDown network for sandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\" successfully" Jan 17 12:23:28.502885 containerd[1460]: time="2025-01-17T12:23:28.502736017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:28.502885 containerd[1460]: time="2025-01-17T12:23:28.502844016Z" level=info msg="RemovePodSandbox \"1df718df8c3060d339e4290dfd32a3f18e24d9cfb37f9833f5ea6eb3f1b70049\" returns successfully" Jan 17 12:23:28.504126 containerd[1460]: time="2025-01-17T12:23:28.503814796Z" level=info msg="StopPodSandbox for \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\"" Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.560 [WARNING][5048] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0", GenerateName:"calico-apiserver-644c6b96bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d31f2d26-9d64-4545-9a49-9ad99ebce942", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644c6b96bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615", Pod:"calico-apiserver-644c6b96bd-jvpvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3b625ad029", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.561 [INFO][5048] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.561 [INFO][5048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" iface="eth0" netns="" Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.561 [INFO][5048] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.561 [INFO][5048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.590 [INFO][5055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" HandleID="k8s-pod-network.41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.590 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.591 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.598 [WARNING][5055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" HandleID="k8s-pod-network.41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.598 [INFO][5055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" HandleID="k8s-pod-network.41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.599 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:28.604408 containerd[1460]: 2025-01-17 12:23:28.601 [INFO][5048] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:28.605355 containerd[1460]: time="2025-01-17T12:23:28.604782564Z" level=info msg="TearDown network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\" successfully" Jan 17 12:23:28.605355 containerd[1460]: time="2025-01-17T12:23:28.604822713Z" level=info msg="StopPodSandbox for \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\" returns successfully" Jan 17 12:23:28.606350 containerd[1460]: time="2025-01-17T12:23:28.605773871Z" level=info msg="RemovePodSandbox for \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\"" Jan 17 12:23:28.606350 containerd[1460]: time="2025-01-17T12:23:28.605817673Z" level=info msg="Forcibly stopping sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\"" Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.654 [WARNING][5073] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0", GenerateName:"calico-apiserver-644c6b96bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d31f2d26-9d64-4545-9a49-9ad99ebce942", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644c6b96bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"cbedc8a5fd87f5d28a093bd8486fc7bae161b4b959fccbe8fa84ae39fe396615", Pod:"calico-apiserver-644c6b96bd-jvpvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3b625ad029", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.655 [INFO][5073] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.655 [INFO][5073] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" iface="eth0" netns="" Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.655 [INFO][5073] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.655 [INFO][5073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.686 [INFO][5080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" HandleID="k8s-pod-network.41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.686 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.686 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.692 [WARNING][5080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" HandleID="k8s-pod-network.41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.692 [INFO][5080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" HandleID="k8s-pod-network.41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--jvpvw-eth0" Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.694 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:28.698664 containerd[1460]: 2025-01-17 12:23:28.696 [INFO][5073] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce" Jan 17 12:23:28.699644 containerd[1460]: time="2025-01-17T12:23:28.699299472Z" level=info msg="TearDown network for sandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\" successfully" Jan 17 12:23:28.702872 containerd[1460]: time="2025-01-17T12:23:28.702734553Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:28.702872 containerd[1460]: time="2025-01-17T12:23:28.702842814Z" level=info msg="RemovePodSandbox \"41f4927bc9b5cabcb616d4a13f260d5de9e91de8060b8414d56045d683d718ce\" returns successfully" Jan 17 12:23:28.703991 containerd[1460]: time="2025-01-17T12:23:28.703585534Z" level=info msg="StopPodSandbox for \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\"" Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.748 [WARNING][5098] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0", GenerateName:"calico-apiserver-644c6b96bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a25078f-72c0-4f3c-95ba-d53d9ddcf023", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644c6b96bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e", Pod:"calico-apiserver-644c6b96bd-qlwqs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a5f741ba23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.749 [INFO][5098] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.749 [INFO][5098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" iface="eth0" netns="" Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.749 [INFO][5098] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.749 [INFO][5098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.799 [INFO][5104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" HandleID="k8s-pod-network.4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.799 [INFO][5104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.799 [INFO][5104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.810 [WARNING][5104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" HandleID="k8s-pod-network.4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.811 [INFO][5104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" HandleID="k8s-pod-network.4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.816 [INFO][5104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:28.824063 containerd[1460]: 2025-01-17 12:23:28.821 [INFO][5098] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:28.825318 containerd[1460]: time="2025-01-17T12:23:28.824232650Z" level=info msg="TearDown network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\" successfully" Jan 17 12:23:28.825318 containerd[1460]: time="2025-01-17T12:23:28.824280319Z" level=info msg="StopPodSandbox for \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\" returns successfully" Jan 17 12:23:28.825612 containerd[1460]: time="2025-01-17T12:23:28.825579133Z" level=info msg="RemovePodSandbox for \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\"" Jan 17 12:23:28.825690 containerd[1460]: time="2025-01-17T12:23:28.825669716Z" level=info msg="Forcibly stopping sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\"" Jan 17 12:23:28.956240 kubelet[2547]: I0117 12:23:28.955017 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.892 [WARNING][5122] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0", GenerateName:"calico-apiserver-644c6b96bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a25078f-72c0-4f3c-95ba-d53d9ddcf023", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644c6b96bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"b71a3241894b1b7f866c29aef2496e78ba24711780f4cab34eabce86553a032e", Pod:"calico-apiserver-644c6b96bd-qlwqs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a5f741ba23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.893 [INFO][5122] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.893 [INFO][5122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" iface="eth0" netns="" Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.893 [INFO][5122] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.893 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.942 [INFO][5129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" HandleID="k8s-pod-network.4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.942 [INFO][5129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.942 [INFO][5129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.962 [WARNING][5129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" HandleID="k8s-pod-network.4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.962 [INFO][5129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" HandleID="k8s-pod-network.4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Workload="ci--4081.3.0--6--c2def92c28-k8s-calico--apiserver--644c6b96bd--qlwqs-eth0" Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.964 [INFO][5129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:28.970449 containerd[1460]: 2025-01-17 12:23:28.968 [INFO][5122] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d" Jan 17 12:23:28.972360 containerd[1460]: time="2025-01-17T12:23:28.971120036Z" level=info msg="TearDown network for sandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\" successfully" Jan 17 12:23:28.976721 containerd[1460]: time="2025-01-17T12:23:28.976512449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:28.976721 containerd[1460]: time="2025-01-17T12:23:28.976611643Z" level=info msg="RemovePodSandbox \"4c41624cc8b20b4debb867799724d37c4ecad9cb752d21b70800210151afd11d\" returns successfully" Jan 17 12:23:28.977418 containerd[1460]: time="2025-01-17T12:23:28.977283424Z" level=info msg="StopPodSandbox for \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\"" Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.053 [WARNING][5147] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"151ac44f-4692-405d-a3ad-26a51dc59114", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1", Pod:"csi-node-driver-gzkcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31353300c2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.053 [INFO][5147] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.053 [INFO][5147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" iface="eth0" netns="" Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.053 [INFO][5147] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.053 [INFO][5147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.091 [INFO][5154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" HandleID="k8s-pod-network.77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.091 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.091 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.100 [WARNING][5154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" HandleID="k8s-pod-network.77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.100 [INFO][5154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" HandleID="k8s-pod-network.77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.102 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:29.108503 containerd[1460]: 2025-01-17 12:23:29.104 [INFO][5147] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:29.108503 containerd[1460]: time="2025-01-17T12:23:29.107320673Z" level=info msg="TearDown network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\" successfully" Jan 17 12:23:29.108503 containerd[1460]: time="2025-01-17T12:23:29.107346020Z" level=info msg="StopPodSandbox for \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\" returns successfully" Jan 17 12:23:29.110426 containerd[1460]: time="2025-01-17T12:23:29.110391345Z" level=info msg="RemovePodSandbox for \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\"" Jan 17 12:23:29.110426 containerd[1460]: time="2025-01-17T12:23:29.110429391Z" level=info msg="Forcibly stopping sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\"" Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.173 [WARNING][5172] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"151ac44f-4692-405d-a3ad-26a51dc59114", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-6-c2def92c28", ContainerID:"c5a8be4a05936d2c6d032726c3551f552e45dba70a6d568a14cf0c8f76694fb1", Pod:"csi-node-driver-gzkcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31353300c2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.173 [INFO][5172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.173 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" iface="eth0" netns="" Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.174 [INFO][5172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.174 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.212 [INFO][5178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" HandleID="k8s-pod-network.77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.212 [INFO][5178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.212 [INFO][5178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.222 [WARNING][5178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" HandleID="k8s-pod-network.77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.222 [INFO][5178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" HandleID="k8s-pod-network.77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Workload="ci--4081.3.0--6--c2def92c28-k8s-csi--node--driver--gzkcx-eth0" Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.224 [INFO][5178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:29.229869 containerd[1460]: 2025-01-17 12:23:29.227 [INFO][5172] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79" Jan 17 12:23:29.231558 containerd[1460]: time="2025-01-17T12:23:29.229894803Z" level=info msg="TearDown network for sandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\" successfully" Jan 17 12:23:29.232662 containerd[1460]: time="2025-01-17T12:23:29.232613443Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:29.232767 containerd[1460]: time="2025-01-17T12:23:29.232702046Z" level=info msg="RemovePodSandbox \"77cf2a3313c8c68aa1a0e25b7792ad70fe4b08ef750f67250a1836b2d8260f79\" returns successfully" Jan 17 12:23:29.536556 kubelet[2547]: I0117 12:23:29.536425 2547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-644c6b96bd-jvpvw" podStartSLOduration=28.380383174 podStartE2EDuration="34.536382907s" podCreationTimestamp="2025-01-17 12:22:55 +0000 UTC" firstStartedPulling="2025-01-17 12:23:21.266280229 +0000 UTC m=+54.042545125" lastFinishedPulling="2025-01-17 12:23:27.422279947 +0000 UTC m=+60.198544858" observedRunningTime="2025-01-17 12:23:27.987559178 +0000 UTC m=+60.763824094" watchObservedRunningTime="2025-01-17 12:23:29.536382907 +0000 UTC m=+62.312647824" Jan 17 12:23:31.825554 systemd[1]: Started sshd@13-164.92.109.43:22-139.178.68.195:55050.service - OpenSSH per-connection server daemon (139.178.68.195:55050). Jan 17 12:23:31.922115 sshd[5189]: Accepted publickey for core from 139.178.68.195 port 55050 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:31.924676 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:31.931066 systemd-logind[1442]: New session 14 of user core. Jan 17 12:23:31.938633 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:23:32.283209 sshd[5189]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:32.288310 systemd[1]: sshd@13-164.92.109.43:22-139.178.68.195:55050.service: Deactivated successfully. Jan 17 12:23:32.291901 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:23:32.293028 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:23:32.294180 systemd-logind[1442]: Removed session 14. Jan 17 12:23:37.305622 systemd[1]: Started sshd@14-164.92.109.43:22-139.178.68.195:60698.service - OpenSSH per-connection server daemon (139.178.68.195:60698). 
Jan 17 12:23:37.360943 sshd[5227]: Accepted publickey for core from 139.178.68.195 port 60698 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:37.362993 sshd[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:37.368806 systemd-logind[1442]: New session 15 of user core. Jan 17 12:23:37.375406 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:23:37.554678 sshd[5227]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:37.559299 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:23:37.560423 systemd[1]: sshd@14-164.92.109.43:22-139.178.68.195:60698.service: Deactivated successfully. Jan 17 12:23:37.564227 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:23:37.566849 systemd-logind[1442]: Removed session 15. Jan 17 12:23:40.382580 kubelet[2547]: E0117 12:23:40.382412 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:41.012026 kubelet[2547]: E0117 12:23:41.011801 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:41.402367 kubelet[2547]: E0117 12:23:41.401763 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:42.577637 systemd[1]: Started sshd@15-164.92.109.43:22-139.178.68.195:60700.service - OpenSSH per-connection server daemon (139.178.68.195:60700). Jan 17 12:23:42.668476 sshd[5288]: Accepted publickey for core from 139.178.68.195 port 60700 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:42.670998 sshd[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:42.676358 systemd-logind[1442]: New session 16 of user core. Jan 17 12:23:42.681424 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:23:43.125672 sshd[5288]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:43.134318 systemd[1]: sshd@15-164.92.109.43:22-139.178.68.195:60700.service: Deactivated successfully. Jan 17 12:23:43.136364 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:23:43.138040 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:23:43.144848 systemd[1]: Started sshd@16-164.92.109.43:22-139.178.68.195:60714.service - OpenSSH per-connection server daemon (139.178.68.195:60714). Jan 17 12:23:43.147439 systemd-logind[1442]: Removed session 16. Jan 17 12:23:43.192137 sshd[5304]: Accepted publickey for core from 139.178.68.195 port 60714 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:43.193863 sshd[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:43.200113 systemd-logind[1442]: New session 17 of user core. Jan 17 12:23:43.208415 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:23:43.518124 sshd[5304]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:43.528405 systemd[1]: sshd@16-164.92.109.43:22-139.178.68.195:60714.service: Deactivated successfully. Jan 17 12:23:43.531112 systemd[1]: session-17.scope: Deactivated successfully. 
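The recurring kubelet dns.go "Nameserver limits exceeded" errors above reflect a hard resolver constraint: the glibc resolver only honors the first three nameserver entries (MAXNS), so kubelet clamps the pod's list to three and logs the line that was actually applied (here with 67.207.67.3 repeated, which suggests the node's own resolv.conf duplicates it). Below is a minimal sketch of that clamping over plain resolv.conf text; it is illustrative, not kubelet's dns.go, and the fourth nameserver in the sample is a hypothetical entry added to trigger the limit.

    package main

    import (
    	"fmt"
    	"strings"
    )

    const maxNameservers = 3 // glibc resolver limit (MAXNS)

    // clampNameservers keeps the first three nameserver entries, mirroring the
    // truncation kubelet applies before writing a pod's resolv.conf.
    func clampNameservers(resolvConf string) (kept, omitted []string) {
    	for _, line := range strings.Split(resolvConf, "\n") {
    		f := strings.Fields(line)
    		if len(f) >= 2 && f[0] == "nameserver" {
    			if len(kept) < maxNameservers {
    				kept = append(kept, f[1])
    			} else {
    				omitted = append(omitted, f[1])
    			}
    		}
    	}
    	return kept, omitted
    }

    func main() {
    	// First three entries match the log; 10.245.0.10 is hypothetical.
    	conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 10.245.0.10\n"
    	kept, omitted := clampNameservers(conf)
    	if len(omitted) > 0 {
    		fmt.Println("applied nameserver line:", strings.Join(kept, " "), "omitted:", omitted)
    	}
    }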
Jan 17 12:23:43.531931 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:23:43.540049 systemd[1]: Started sshd@17-164.92.109.43:22-139.178.68.195:60726.service - OpenSSH per-connection server daemon (139.178.68.195:60726). Jan 17 12:23:43.541769 systemd-logind[1442]: Removed session 17. Jan 17 12:23:43.602964 sshd[5315]: Accepted publickey for core from 139.178.68.195 port 60726 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:43.605735 sshd[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:43.612108 systemd-logind[1442]: New session 18 of user core. Jan 17 12:23:43.619519 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:23:45.792528 sshd[5315]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:45.820819 systemd[1]: Started sshd@18-164.92.109.43:22-139.178.68.195:60716.service - OpenSSH per-connection server daemon (139.178.68.195:60716). Jan 17 12:23:45.821445 systemd[1]: sshd@17-164.92.109.43:22-139.178.68.195:60726.service: Deactivated successfully. Jan 17 12:23:45.832701 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:23:45.841140 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:23:45.852282 systemd-logind[1442]: Removed session 18. Jan 17 12:23:45.921953 sshd[5335]: Accepted publickey for core from 139.178.68.195 port 60716 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:45.925281 sshd[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:45.936968 systemd-logind[1442]: New session 19 of user core. Jan 17 12:23:45.940595 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:23:46.570159 sshd[5335]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:46.580368 systemd[1]: sshd@18-164.92.109.43:22-139.178.68.195:60716.service: Deactivated successfully. Jan 17 12:23:46.584183 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:23:46.586300 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:23:46.594612 systemd[1]: Started sshd@19-164.92.109.43:22-139.178.68.195:60720.service - OpenSSH per-connection server daemon (139.178.68.195:60720). Jan 17 12:23:46.598025 systemd-logind[1442]: Removed session 19. Jan 17 12:23:46.644708 sshd[5350]: Accepted publickey for core from 139.178.68.195 port 60720 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:46.646493 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:46.653330 systemd-logind[1442]: New session 20 of user core. Jan 17 12:23:46.658397 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:23:46.790983 sshd[5350]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:46.799038 systemd[1]: sshd@19-164.92.109.43:22-139.178.68.195:60720.service: Deactivated successfully. Jan 17 12:23:46.801813 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:23:46.802799 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:23:46.803884 systemd-logind[1442]: Removed session 20. Jan 17 12:23:51.809625 systemd[1]: Started sshd@20-164.92.109.43:22-139.178.68.195:60724.service - OpenSSH per-connection server daemon (139.178.68.195:60724). 
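Sessions 14 through 20 above all follow the same systemd-logind pairing: "New session N of user core." when the SSH login lands and "Removed session N." once it ends. When auditing a journal like this one, pairing those two messages is a quick way to spot sessions that never closed. The sketch below does exactly that using the phrasings seen in this log; the sample lines are abbreviated copies, and the patterns would need adjusting for journals with different wording.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    var (
    	reNew     = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
    	reRemoved = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
    	// Abbreviated journal lines in the format seen above.
    	journal := []string{
    		"systemd-logind[1442]: New session 17 of user core.",
    		"systemd-logind[1442]: Removed session 17.",
    		"systemd-logind[1442]: New session 18 of user core.",
    	}
    	open := map[string]string{} // session ID -> user
    	for _, line := range journal {
    		if m := reNew.FindStringSubmatch(line); m != nil {
    			open[m[1]] = m[2] // session opened
    		} else if m := reRemoved.FindStringSubmatch(line); m != nil {
    			delete(open, m[1]) // session closed cleanly
    		}
    	}
    	fmt.Println("sessions still open:", open) // map[18:core]
    }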
Jan 17 12:23:51.858931 sshd[5366]: Accepted publickey for core from 139.178.68.195 port 60724 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:51.860904 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:51.868763 systemd-logind[1442]: New session 21 of user core. Jan 17 12:23:51.884547 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:23:52.030864 sshd[5366]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:52.038001 systemd[1]: sshd@20-164.92.109.43:22-139.178.68.195:60724.service: Deactivated successfully. Jan 17 12:23:52.042030 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:23:52.043151 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:23:52.044696 systemd-logind[1442]: Removed session 21. Jan 17 12:23:57.052599 systemd[1]: Started sshd@21-164.92.109.43:22-139.178.68.195:56416.service - OpenSSH per-connection server daemon (139.178.68.195:56416). Jan 17 12:23:57.118093 sshd[5389]: Accepted publickey for core from 139.178.68.195 port 56416 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:57.121661 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:57.131569 systemd-logind[1442]: New session 22 of user core. Jan 17 12:23:57.136478 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:23:57.269909 kubelet[2547]: I0117 12:23:57.269788 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:57.367211 sshd[5389]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:57.374111 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:23:57.375557 systemd[1]: sshd@21-164.92.109.43:22-139.178.68.195:56416.service: Deactivated successfully. Jan 17 12:23:57.378472 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:23:57.381374 systemd-logind[1442]: Removed session 22. Jan 17 12:23:58.416626 kubelet[2547]: E0117 12:23:58.416486 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:24:02.398886 systemd[1]: Started sshd@22-164.92.109.43:22-139.178.68.195:56418.service - OpenSSH per-connection server daemon (139.178.68.195:56418). Jan 17 12:24:02.646701 sshd[5407]: Accepted publickey for core from 139.178.68.195 port 56418 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:24:02.650079 sshd[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:02.657873 systemd-logind[1442]: New session 23 of user core. Jan 17 12:24:02.662507 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:24:03.014908 sshd[5407]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:03.022597 systemd[1]: sshd@22-164.92.109.43:22-139.178.68.195:56418.service: Deactivated successfully. Jan 17 12:24:03.031053 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:24:03.036350 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:24:03.038296 systemd-logind[1442]: Removed session 23. 
Jan 17 12:24:03.417974 kubelet[2547]: E0117 12:24:03.417924 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:24:04.258063 systemd[1]: run-containerd-runc-k8s.io-b40c3f8c62ef171f78b03c5dba2d9e57b9f48db63c6ef8f810cc0fc729f9631e-runc.2sKXWW.mount: Deactivated successfully. Jan 17 12:24:05.125024 systemd[1]: run-containerd-runc-k8s.io-b40c3f8c62ef171f78b03c5dba2d9e57b9f48db63c6ef8f810cc0fc729f9631e-runc.PiH1H8.mount: Deactivated successfully. Jan 17 12:24:08.032748 systemd[1]: Started sshd@23-164.92.109.43:22-139.178.68.195:46346.service - OpenSSH per-connection server daemon (139.178.68.195:46346). Jan 17 12:24:08.094215 sshd[5459]: Accepted publickey for core from 139.178.68.195 port 46346 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:24:08.095681 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:08.101488 systemd-logind[1442]: New session 24 of user core. Jan 17 12:24:08.107499 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:24:08.294919 sshd[5459]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:08.300599 systemd[1]: sshd@23-164.92.109.43:22-139.178.68.195:46346.service: Deactivated successfully. Jan 17 12:24:08.303417 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:24:08.305107 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:24:08.308306 systemd-logind[1442]: Removed session 24. Jan 17 12:24:09.406496 kubelet[2547]: E0117 12:24:09.405641 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:24:09.406496 kubelet[2547]: E0117 12:24:09.405640 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
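The transient run-containerd-runc-...mount units deactivated above are systemd mount units named after runc's temporary bind-mount paths under /run/containerd/runc/k8s.io/; systemd derives a mount unit's name by escaping the mount path (drop the leading slash, turn each remaining slash into a dash, hex-escape other special bytes). The toy escaper below reconstructs the unit name from the path implied by the log; it only handles the characters these particular paths contain, unlike the real systemd-escape, and the path itself is inferred from the unit name rather than taken from the log.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // escapePath approximates `systemd-escape --path` for simple paths: drop
    // the leading "/" and map every remaining "/" to "-". (The real escaper
    // also hex-escapes bytes outside [a-zA-Z0-9:_.], none of which occur here.)
    func escapePath(p string) string {
    	return strings.ReplaceAll(strings.TrimPrefix(p, "/"), "/", "-")
    }

    func main() {
    	// Path inferred from the transient mount unit in the log above.
    	p := "/run/containerd/runc/k8s.io/b40c3f8c62ef171f78b03c5dba2d9e57b9f48db63c6ef8f810cc0fc729f9631e/runc.2sKXWW"
    	// Prints run-containerd-runc-k8s.io-b40c...631e-runc.2sKXWW.mount,
    	// matching the unit name systemd reported when the mount went away.
    	fmt.Println(escapePath(p) + ".mount")
    }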