Jan 17 12:17:47.003587 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:17:47.003663 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:17:47.003686 kernel: BIOS-provided physical RAM map:
Jan 17 12:17:47.003698 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:17:47.003710 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:17:47.003723 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:17:47.003738 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jan 17 12:17:47.003750 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jan 17 12:17:47.003763 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:17:47.003779 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:17:47.003790 kernel: NX (Execute Disable) protection: active
Jan 17 12:17:47.003803 kernel: APIC: Static calls initialized
Jan 17 12:17:47.003816 kernel: SMBIOS 2.8 present.
Jan 17 12:17:47.003829 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 17 12:17:47.003845 kernel: Hypervisor detected: KVM
Jan 17 12:17:47.003862 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:17:47.003876 kernel: kvm-clock: using sched offset of 3585788513 cycles
Jan 17 12:17:47.003891 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:17:47.003908 kernel: tsc: Detected 1995.312 MHz processor
Jan 17 12:17:47.003922 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:17:47.003937 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:17:47.003951 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jan 17 12:17:47.003963 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:17:47.003974 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:17:47.003987 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:17:47.003996 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jan 17 12:17:47.004007 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:47.004017 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:47.004027 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:47.004037 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 17 12:17:47.004046 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:47.004056 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:47.004065 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:47.004078 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:17:47.004088 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 17 12:17:47.004098 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 17 12:17:47.004107 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 17 12:17:47.004117 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 17 12:17:47.004126 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 17 12:17:47.004137 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 17 12:17:47.004152 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 17 12:17:47.004165 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:17:47.004175 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 12:17:47.004233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 12:17:47.004247 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 12:17:47.004260 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jan 17 12:17:47.004272 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jan 17 12:17:47.004288 kernel: Zone ranges:
Jan 17 12:17:47.004361 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:17:47.004377 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jan 17 12:17:47.004395 kernel: Normal empty
Jan 17 12:17:47.004407 kernel: Movable zone start for each node
Jan 17 12:17:47.004418 kernel: Early memory node ranges
Jan 17 12:17:47.004429 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:17:47.004441 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jan 17 12:17:47.004454 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jan 17 12:17:47.004471 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:17:47.004482 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:17:47.004496 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jan 17 12:17:47.004506 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:17:47.004517 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:17:47.004529 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:17:47.004539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:17:47.004551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:17:47.004562 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:17:47.004578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:17:47.004619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:17:47.004631 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:17:47.004662 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:17:47.004674 kernel: TSC deadline timer available
Jan 17 12:17:47.004685 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:17:47.004697 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:17:47.004709 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 17 12:17:47.004722 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:17:47.004738 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:17:47.004750 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:17:47.004764 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:17:47.004777 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:17:47.004789 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:17:47.004801 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 12:17:47.004818 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:17:47.004831 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:17:47.004848 kernel: random: crng init done
Jan 17 12:17:47.004858 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:17:47.004869 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:17:47.004879 kernel: Fallback order for Node 0: 0
Jan 17 12:17:47.004891 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jan 17 12:17:47.004902 kernel: Policy zone: DMA32
Jan 17 12:17:47.004913 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:17:47.004926 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125148K reserved, 0K cma-reserved)
Jan 17 12:17:47.004937 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:17:47.004953 kernel: Kernel/User page tables isolation: enabled
Jan 17 12:17:47.004965 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:17:47.004978 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:17:47.004988 kernel: Dynamic Preempt: voluntary
Jan 17 12:17:47.004999 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:17:47.005011 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:17:47.005022 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:17:47.005032 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:17:47.005087 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:17:47.005103 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:17:47.005115 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:17:47.005126 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:17:47.005136 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:17:47.005147 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:17:47.005159 kernel: Console: colour VGA+ 80x25
Jan 17 12:17:47.005171 kernel: printk: console [tty0] enabled
Jan 17 12:17:47.005182 kernel: printk: console [ttyS0] enabled
Jan 17 12:17:47.005193 kernel: ACPI: Core revision 20230628
Jan 17 12:17:47.005205 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:17:47.005220 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:17:47.005232 kernel: x2apic enabled
Jan 17 12:17:47.005255 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:17:47.005267 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:17:47.005278 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Jan 17 12:17:47.005289 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Jan 17 12:17:47.005300 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 12:17:47.005312 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 12:17:47.005339 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:17:47.005351 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:17:47.005363 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:17:47.005380 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:17:47.005393 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 12:17:47.005406 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:17:47.005418 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:17:47.005431 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 12:17:47.005444 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:17:47.005464 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:17:47.005478 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:17:47.005492 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:17:47.005505 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:17:47.005517 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 12:17:47.005529 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:17:47.005542 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:17:47.005555 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:17:47.005572 kernel: landlock: Up and running.
Jan 17 12:17:47.005584 kernel: SELinux: Initializing.
Jan 17 12:17:47.005628 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:17:47.005642 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:17:47.005700 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 17 12:17:47.005716 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:17:47.005732 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:17:47.005748 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:17:47.005768 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 17 12:17:47.005783 kernel: signal: max sigframe size: 1776
Jan 17 12:17:47.005799 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:17:47.005815 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:17:47.005830 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:17:47.005846 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:17:47.005862 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:17:47.005876 kernel: .... node #0, CPUs: #1
Jan 17 12:17:47.005891 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:17:47.005907 kernel: smpboot: Max logical packages: 1
Jan 17 12:17:47.005927 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Jan 17 12:17:47.005943 kernel: devtmpfs: initialized
Jan 17 12:17:47.005958 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:17:47.005973 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:17:47.006001 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:17:47.006045 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:17:47.006060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:17:47.006075 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:17:47.006091 kernel: audit: type=2000 audit(1737116265.698:1): state=initialized audit_enabled=0 res=1
Jan 17 12:17:47.006109 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:17:47.006125 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:17:47.006140 kernel: cpuidle: using governor menu
Jan 17 12:17:47.006155 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:17:47.006170 kernel: dca service started, version 1.12.1
Jan 17 12:17:47.006184 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:17:47.006198 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:17:47.006212 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:17:47.006225 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:17:47.006244 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:17:47.006259 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:17:47.006274 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:17:47.006290 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:17:47.006305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:17:47.006320 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:17:47.006335 kernel: ACPI: Interpreter enabled
Jan 17 12:17:47.006350 kernel: ACPI: PM: (supports S0 S5)
Jan 17 12:17:47.006365 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:17:47.006409 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:17:47.006424 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:17:47.006439 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 12:17:47.006452 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:17:47.006786 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:17:47.006975 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:17:47.007137 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:17:47.007163 kernel: acpiphp: Slot [3] registered
Jan 17 12:17:47.007179 kernel: acpiphp: Slot [4] registered
Jan 17 12:17:47.007194 kernel: acpiphp: Slot [5] registered
Jan 17 12:17:47.007209 kernel: acpiphp: Slot [6] registered
Jan 17 12:17:47.007225 kernel: acpiphp: Slot [7] registered
Jan 17 12:17:47.007240 kernel: acpiphp: Slot [8] registered
Jan 17 12:17:47.007255 kernel: acpiphp: Slot [9] registered
Jan 17 12:17:47.007270 kernel: acpiphp: Slot [10] registered
Jan 17 12:17:47.007285 kernel: acpiphp: Slot [11] registered
Jan 17 12:17:47.007305 kernel: acpiphp: Slot [12] registered
Jan 17 12:17:47.007320 kernel: acpiphp: Slot [13] registered
Jan 17 12:17:47.007335 kernel: acpiphp: Slot [14] registered
Jan 17 12:17:47.007350 kernel: acpiphp: Slot [15] registered
Jan 17 12:17:47.007365 kernel: acpiphp: Slot [16] registered
Jan 17 12:17:47.007380 kernel: acpiphp: Slot [17] registered
Jan 17 12:17:47.007394 kernel: acpiphp: Slot [18] registered
Jan 17 12:17:47.007409 kernel: acpiphp: Slot [19] registered
Jan 17 12:17:47.007423 kernel: acpiphp: Slot [20] registered
Jan 17 12:17:47.007438 kernel: acpiphp: Slot [21] registered
Jan 17 12:17:47.007457 kernel: acpiphp: Slot [22] registered
Jan 17 12:17:47.007472 kernel: acpiphp: Slot [23] registered
Jan 17 12:17:47.007488 kernel: acpiphp: Slot [24] registered
Jan 17 12:17:47.007502 kernel: acpiphp: Slot [25] registered
Jan 17 12:17:47.007517 kernel: acpiphp: Slot [26] registered
Jan 17 12:17:47.007532 kernel: acpiphp: Slot [27] registered
Jan 17 12:17:47.007548 kernel: acpiphp: Slot [28] registered
Jan 17 12:17:47.007563 kernel: acpiphp: Slot [29] registered
Jan 17 12:17:47.007577 kernel: acpiphp: Slot [30] registered
Jan 17 12:17:47.007617 kernel: acpiphp: Slot [31] registered
Jan 17 12:17:47.007632 kernel: PCI host bridge to bus 0000:00
Jan 17 12:17:47.007818 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:17:47.007976 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:17:47.008121 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:17:47.008252 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 12:17:47.008386 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 17 12:17:47.008519 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:17:47.008767 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:17:47.008954 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 12:17:47.009141 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 12:17:47.009293 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 17 12:17:47.009440 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 12:17:47.009649 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 12:17:47.009827 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 12:17:47.009970 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 12:17:47.010764 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 17 12:17:47.010944 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 17 12:17:47.011108 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 12:17:47.011260 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 12:17:47.011422 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 12:17:47.011625 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 12:17:47.011785 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 12:17:47.011938 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 17 12:17:47.012092 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 17 12:17:47.012249 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 12:17:47.012401 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:17:47.012572 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:17:47.014889 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 17 12:17:47.015083 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 17 12:17:47.015251 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 17 12:17:47.015423 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:17:47.015586 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 17 12:17:47.015829 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 17 12:17:47.016083 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 17 12:17:47.016297 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 17 12:17:47.016550 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 17 12:17:47.018952 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 17 12:17:47.019183 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 17 12:17:47.019367 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:17:47.019544 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:17:47.019727 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 17 12:17:47.019876 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 17 12:17:47.020083 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:17:47.020239 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 17 12:17:47.020426 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 17 12:17:47.023708 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 17 12:17:47.023957 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 12:17:47.024145 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 17 12:17:47.024307 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 17 12:17:47.024327 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:17:47.024342 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:17:47.024356 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:17:47.024369 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:17:47.024390 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:17:47.024405 kernel: iommu: Default domain type: Translated
Jan 17 12:17:47.024420 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:17:47.024434 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:17:47.024447 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:17:47.024461 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:17:47.024475 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jan 17 12:17:47.024737 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 12:17:47.024891 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 12:17:47.025054 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:17:47.025074 kernel: vgaarb: loaded
Jan 17 12:17:47.025088 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:17:47.025101 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:17:47.025114 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:17:47.025126 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:17:47.025140 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:17:47.025154 kernel: pnp: PnP ACPI init
Jan 17 12:17:47.025166 kernel: pnp: PnP ACPI: found 4 devices
Jan 17 12:17:47.025192 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:17:47.025205 kernel: NET: Registered PF_INET protocol family
Jan 17 12:17:47.025218 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:17:47.025232 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 12:17:47.025244 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:17:47.025258 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:17:47.025271 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 12:17:47.025284 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 12:17:47.025297 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:17:47.025312 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:17:47.025324 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:17:47.025336 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:17:47.025503 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:17:47.027805 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:17:47.027968 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:17:47.028094 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 12:17:47.028220 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 17 12:17:47.028410 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 12:17:47.028629 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:17:47.028654 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:17:47.028810 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 35426 usecs
Jan 17 12:17:47.028878 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:17:47.028894 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 12:17:47.028906 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Jan 17 12:17:47.028920 kernel: Initialise system trusted keyrings
Jan 17 12:17:47.028953 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 12:17:47.028966 kernel: Key type asymmetric registered
Jan 17 12:17:47.028978 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:17:47.028990 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:17:47.029008 kernel: io scheduler mq-deadline registered
Jan 17 12:17:47.029020 kernel: io scheduler kyber registered
Jan 17 12:17:47.029032 kernel: io scheduler bfq registered
Jan 17 12:17:47.029044 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:17:47.029057 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 12:17:47.029074 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:17:47.029085 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:17:47.029103 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:17:47.029114 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:17:47.029127 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:17:47.029139 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:17:47.029151 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:17:47.029163 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:17:47.029364 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 12:17:47.029512 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 12:17:47.030786 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:17:46 UTC (1737116266)
Jan 17 12:17:47.030947 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 12:17:47.030967 kernel: intel_pstate: CPU model not supported
Jan 17 12:17:47.030981 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:17:47.030994 kernel: Segment Routing with IPv6
Jan 17 12:17:47.031008 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:17:47.031023 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:17:47.031045 kernel: Key type dns_resolver registered
Jan 17 12:17:47.031060 kernel: IPI shorthand broadcast: enabled
Jan 17 12:17:47.031075 kernel: sched_clock: Marking stable (1189003367, 150874374)->(1391300493, -51422752)
Jan 17 12:17:47.031088 kernel: registered taskstats version 1
Jan 17 12:17:47.031100 kernel: Loading compiled-in X.509 certificates
Jan 17 12:17:47.031114 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:17:47.031126 kernel: Key type .fscrypt registered
Jan 17 12:17:47.031138 kernel: Key type fscrypt-provisioning registered
Jan 17 12:17:47.031153 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:17:47.031171 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:17:47.031185 kernel: ima: No architecture policies found
Jan 17 12:17:47.031200 kernel: clk: Disabling unused clocks
Jan 17 12:17:47.031214 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:17:47.031229 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:17:47.031271 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:17:47.031287 kernel: Run /init as init process
Jan 17 12:17:47.031300 kernel: with arguments:
Jan 17 12:17:47.031315 kernel: /init
Jan 17 12:17:47.031332 kernel: with environment:
Jan 17 12:17:47.031345 kernel: HOME=/
Jan 17 12:17:47.031358 kernel: TERM=linux
Jan 17 12:17:47.031372 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:17:47.031391 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:17:47.031409 systemd[1]: Detected virtualization kvm.
Jan 17 12:17:47.031426 systemd[1]: Detected architecture x86-64.
Jan 17 12:17:47.031444 systemd[1]: Running in initrd.
Jan 17 12:17:47.031459 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:17:47.031473 systemd[1]: Hostname set to .
Jan 17 12:17:47.031488 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:17:47.031504 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:17:47.031531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:17:47.031547 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:17:47.031566 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:17:47.031587 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:17:47.031623 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:17:47.031638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:17:47.031655 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:17:47.031671 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:17:47.031687 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:17:47.031702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:17:47.031723 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:17:47.031738 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:17:47.031754 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:17:47.031772 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:17:47.031788 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:17:47.031804 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:17:47.031824 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:17:47.031839 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:17:47.031855 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:17:47.031872 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:17:47.031889 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:17:47.031904 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:17:47.031920 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:17:47.031934 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:17:47.031952 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:17:47.031967 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:17:47.031983 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:17:47.032000 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:17:47.032061 systemd-journald[183]: Collecting audit messages is disabled.
Jan 17 12:17:47.032106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:47.032121 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:17:47.032137 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:17:47.032152 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:17:47.032170 systemd-journald[183]: Journal started
Jan 17 12:17:47.032213 systemd-journald[183]: Runtime Journal (/run/log/journal/bb482d9a3cd74ceb8fb3ef30998bd54e) is 4.9M, max 39.3M, 34.4M free.
Jan 17 12:17:47.039667 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:17:47.044840 systemd-modules-load[184]: Inserted module 'overlay'
Jan 17 12:17:47.091789 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:17:47.091842 kernel: Bridge firewalling registered
Jan 17 12:17:47.091863 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:17:47.076981 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 17 12:17:47.094647 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:17:47.099736 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:47.102100 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:17:47.111862 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:17:47.114830 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:17:47.116862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:17:47.126739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:17:47.145014 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:47.149200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:17:47.152158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:17:47.170973 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:17:47.172995 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:17:47.183923 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:17:47.186587 dracut-cmdline[216]: dracut-dracut-053
Jan 17 12:17:47.188066 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:17:47.218402 systemd-resolved[221]: Positive Trust Anchors:
Jan 17 12:17:47.219394 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:17:47.219432 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:17:47.225784 systemd-resolved[221]: Defaulting to hostname 'linux'.
Jan 17 12:17:47.228106 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:17:47.228858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:17:47.287672 kernel: SCSI subsystem initialized
Jan 17 12:17:47.300636 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:17:47.315634 kernel: iscsi: registered transport (tcp)
Jan 17 12:17:47.345003 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:17:47.345097 kernel: QLogic iSCSI HBA Driver
Jan 17 12:17:47.405105 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:17:47.411911 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:17:47.455992 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:17:47.456075 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:17:47.456089 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:17:47.511685 kernel: raid6: avx2x4 gen() 23557 MB/s
Jan 17 12:17:47.528681 kernel: raid6: avx2x2 gen() 29496 MB/s
Jan 17 12:17:47.545893 kernel: raid6: avx2x1 gen() 20557 MB/s
Jan 17 12:17:47.545982 kernel: raid6: using algorithm avx2x2 gen() 29496 MB/s
Jan 17 12:17:47.564694 kernel: raid6: .... xor() 15848 MB/s, rmw enabled
Jan 17 12:17:47.564792 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:17:47.590653 kernel: xor: automatically using best checksumming function avx
Jan 17 12:17:47.793674 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:17:47.810702 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:17:47.822028 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:17:47.839112 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 17 12:17:47.845618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:17:47.853837 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:17:47.875970 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Jan 17 12:17:47.919814 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:17:47.925957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:17:47.991874 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:17:48.001809 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:17:48.034001 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:17:48.037703 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:17:48.038419 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:17:48.040866 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:17:48.048970 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:17:48.067652 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:17:48.075841 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 17 12:17:48.100772 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 12:17:48.100943 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:17:48.100964 kernel: GPT:9289727 != 125829119
Jan 17 12:17:48.100981 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:17:48.100992 kernel: GPT:9289727 != 125829119
Jan 17 12:17:48.101002 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:17:48.101012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:48.101022 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 17 12:17:48.163898 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Jan 17 12:17:48.164204 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:17:48.164219 kernel: scsi host0: Virtio SCSI HBA
Jan 17 12:17:48.164403 kernel: libata version 3.00 loaded.
Jan 17 12:17:48.164422 kernel: ACPI: bus type USB registered
Jan 17 12:17:48.164441 kernel: usbcore: registered new interface driver usbfs
Jan 17 12:17:48.164459 kernel: usbcore: registered new interface driver hub
Jan 17 12:17:48.166048 kernel: usbcore: registered new device driver usb
Jan 17 12:17:48.182814 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 12:17:48.236398 kernel: scsi host1: ata_piix
Jan 17 12:17:48.236557 kernel: scsi host2: ata_piix
Jan 17 12:17:48.236708 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 17 12:17:48.236721 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 17 12:17:48.196497 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:17:48.196669 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:48.197421 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:17:48.242201 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:17:48.197953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:17:48.304219 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:17:48.304260 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461)
Jan 17 12:17:48.304278 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (460)
Jan 17 12:17:48.198115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:48.198896 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:48.205906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:48.228701 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:17:48.265531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:17:48.309210 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:17:48.313418 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:48.318444 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:17:48.319163 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:17:48.337067 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:17:48.341872 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:17:48.352295 disk-uuid[531]: Primary Header is updated.
Jan 17 12:17:48.352295 disk-uuid[531]: Secondary Entries is updated.
Jan 17 12:17:48.352295 disk-uuid[531]: Secondary Header is updated.
Jan 17 12:17:48.356926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:48.391373 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:48.421624 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 17 12:17:48.448435 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 17 12:17:48.448626 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 17 12:17:48.448789 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 17 12:17:48.448946 kernel: hub 1-0:1.0: USB hub found
Jan 17 12:17:48.449126 kernel: hub 1-0:1.0: 2 ports detected
Jan 17 12:17:49.374632 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:17:49.375307 disk-uuid[534]: The operation has completed successfully.
Jan 17 12:17:49.415339 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:17:49.415466 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:17:49.437952 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:17:49.442675 sh[564]: Success
Jan 17 12:17:49.458630 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:17:49.510190 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:17:49.531861 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:17:49.534798 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:17:49.560661 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:17:49.560757 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:49.560776 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:17:49.560792 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:17:49.560818 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:17:49.571490 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:17:49.572759 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:17:49.577931 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:17:49.579552 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:17:49.595963 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:49.596038 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:49.597971 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:17:49.603661 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:17:49.617454 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:17:49.619871 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:49.625754 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:17:49.634985 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:17:49.744721 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:17:49.759096 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:17:49.797121 ignition[658]: Ignition 2.19.0
Jan 17 12:17:49.798005 ignition[658]: Stage: fetch-offline
Jan 17 12:17:49.798692 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:49.799277 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:49.799557 systemd-networkd[748]: lo: Link UP
Jan 17 12:17:49.799562 systemd-networkd[748]: lo: Gained carrier
Jan 17 12:17:49.800707 ignition[658]: parsed url from cmdline: ""
Jan 17 12:17:49.801949 systemd-networkd[748]: Enumeration completed
Jan 17 12:17:49.800713 ignition[658]: no config URL provided
Jan 17 12:17:49.802063 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:17:49.800723 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:17:49.802999 systemd[1]: Reached target network.target - Network.
Jan 17 12:17:49.800737 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:17:49.803540 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 12:17:49.800747 ignition[658]: failed to fetch config: resource requires networking
Jan 17 12:17:49.803544 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 17 12:17:49.801116 ignition[658]: Ignition finished successfully
Jan 17 12:17:49.804896 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:17:49.804900 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:17:49.805614 systemd-networkd[748]: eth0: Link UP
Jan 17 12:17:49.805618 systemd-networkd[748]: eth0: Gained carrier
Jan 17 12:17:49.805626 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 12:17:49.806222 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:17:49.808913 systemd-networkd[748]: eth1: Link UP
Jan 17 12:17:49.808917 systemd-networkd[748]: eth1: Gained carrier
Jan 17 12:17:49.808927 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:17:49.813394 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:17:49.823705 systemd-networkd[748]: eth0: DHCPv4 address 209.38.138.250/19, gateway 209.38.128.1 acquired from 169.254.169.253
Jan 17 12:17:49.829735 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.12/20 acquired from 169.254.169.253
Jan 17 12:17:49.839131 ignition[755]: Ignition 2.19.0
Jan 17 12:17:49.839143 ignition[755]: Stage: fetch
Jan 17 12:17:49.839391 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:49.839404 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:49.839566 ignition[755]: parsed url from cmdline: ""
Jan 17 12:17:49.839571 ignition[755]: no config URL provided
Jan 17 12:17:49.839577 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:17:49.839588 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:17:49.839635 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 17 12:17:49.855786 ignition[755]: GET result: OK
Jan 17 12:17:49.856003 ignition[755]: parsing config with SHA512: fd82a64ecec3aa9f7ba7f59e5287f544b07f0c9e05e7a0c12ed6b693b68dec6305a9118f5cc84af33cd876f20a2dd59d0ef271971d2266df8ae0a15d54c8fdd4
Jan 17 12:17:49.861879 unknown[755]: fetched base config from "system"
Jan 17 12:17:49.861891 unknown[755]: fetched base config from "system"
Jan 17 12:17:49.862576 ignition[755]: fetch: fetch complete
Jan 17 12:17:49.861901 unknown[755]: fetched user config from "digitalocean"
Jan 17 12:17:49.862582 ignition[755]: fetch: fetch passed
Jan 17 12:17:49.864566 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:17:49.862654 ignition[755]: Ignition finished successfully
Jan 17 12:17:49.871909 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:17:49.891535 ignition[763]: Ignition 2.19.0
Jan 17 12:17:49.891551 ignition[763]: Stage: kargs
Jan 17 12:17:49.891821 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:49.891837 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:49.894945 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:17:49.893088 ignition[763]: kargs: kargs passed
Jan 17 12:17:49.893167 ignition[763]: Ignition finished successfully
Jan 17 12:17:49.901877 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:17:49.924086 ignition[769]: Ignition 2.19.0
Jan 17 12:17:49.924100 ignition[769]: Stage: disks
Jan 17 12:17:49.924343 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:49.927196 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:17:49.924362 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:49.928838 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:17:49.925410 ignition[769]: disks: disks passed
Jan 17 12:17:49.934874 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:17:49.925462 ignition[769]: Ignition finished successfully
Jan 17 12:17:49.936327 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:17:49.937472 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:17:49.938509 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:17:49.945849 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:17:49.963164 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:17:49.967081 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:17:49.974818 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:17:50.094666 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:17:50.096035 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:17:50.097281 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:17:50.113859 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:17:50.116749 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:17:50.120797 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 17 12:17:50.124134 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 12:17:50.127435 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (786)
Jan 17 12:17:50.127553 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:17:50.128783 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:17:50.134015 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:50.134074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:50.135683 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:17:50.138666 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:17:50.143341 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:17:50.147617 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:17:50.152109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:17:50.234724 coreos-metadata[789]: Jan 17 12:17:50.234 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:17:50.239285 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:17:50.241918 coreos-metadata[788]: Jan 17 12:17:50.241 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:17:50.246763 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:17:50.247946 coreos-metadata[789]: Jan 17 12:17:50.247 INFO Fetch successful
Jan 17 12:17:50.252683 coreos-metadata[788]: Jan 17 12:17:50.252 INFO Fetch successful
Jan 17 12:17:50.258925 coreos-metadata[789]: Jan 17 12:17:50.255 INFO wrote hostname ci-4081.3.0-f-fd30d73867 to /sysroot/etc/hostname
Jan 17 12:17:50.258393 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:17:50.263451 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:17:50.261762 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 17 12:17:50.261865 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 17 12:17:50.269144 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:17:50.381903 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:17:50.389826 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:17:50.391815 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:17:50.403633 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:50.432991 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:17:50.445356 ignition[906]: INFO : Ignition 2.19.0
Jan 17 12:17:50.445356 ignition[906]: INFO : Stage: mount
Jan 17 12:17:50.445356 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:50.445356 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:50.445356 ignition[906]: INFO : mount: mount passed
Jan 17 12:17:50.445356 ignition[906]: INFO : Ignition finished successfully
Jan 17 12:17:50.446268 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:17:50.453792 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:17:50.554847 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:17:50.572029 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:17:50.587091 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918)
Jan 17 12:17:50.587190 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:17:50.588647 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:17:50.590018 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:17:50.594657 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:17:50.597837 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:17:50.629576 ignition[935]: INFO : Ignition 2.19.0
Jan 17 12:17:50.629576 ignition[935]: INFO : Stage: files
Jan 17 12:17:50.631558 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:50.631558 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:50.631558 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:17:50.636048 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:17:50.636048 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:17:50.639062 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:17:50.640535 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:17:50.640535 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:17:50.639548 unknown[935]: wrote ssh authorized keys file for user: core
Jan 17 12:17:50.643939 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:17:50.643939 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 17 12:17:50.692529 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 12:17:50.756650 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:17:50.756650 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:17:50.760064 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 17 12:17:51.137009 systemd-networkd[748]: eth1: Gained IPv6LL
Jan 17 12:17:51.231379 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 12:17:51.457024 systemd-networkd[748]: eth0: Gained IPv6LL
Jan 17 12:17:51.570933 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:17:51.570933 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 12:17:51.573435 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:17:51.573435 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:17:51.573435 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 12:17:51.573435 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:17:51.573435 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:17:51.573435 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:17:51.573435 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:17:51.573435 ignition[935]: INFO : files: files passed
Jan 17 12:17:51.573435 ignition[935]: INFO : Ignition finished successfully
Jan 17 12:17:51.574860 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:17:51.586908 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:17:51.590868 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:17:51.592801 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:17:51.592942 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:17:51.622664 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:17:51.622664 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:17:51.625670 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:17:51.627835 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:17:51.630508 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:17:51.649969 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:17:51.694274 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:17:51.694652 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:17:51.696947 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:17:51.698328 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:17:51.699939 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:17:51.705883 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:17:51.725901 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:17:51.732893 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:17:51.747838 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:17:51.748679 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:17:51.749344 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:17:51.749940 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:17:51.750082 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:17:51.752004 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:17:51.752755 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:17:51.753692 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:17:51.755698 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:17:51.756863 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:17:51.757890 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:17:51.759940 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:17:51.761287 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:17:51.762663 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:17:51.764068 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:17:51.765048 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:17:51.765218 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:17:51.766840 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:17:51.767568 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:17:51.768659 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:17:51.769013 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:17:51.770078 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:17:51.770260 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:17:51.771999 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:17:51.772130 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:17:51.774868 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:17:51.775031 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:17:51.776037 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 12:17:51.776196 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:17:51.784005 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:17:51.784742 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:17:51.785002 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:17:51.788820 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:17:51.789357 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:17:51.789531 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:17:51.790406 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:17:51.790568 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:17:51.800463 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:17:51.800623 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:17:51.814120 ignition[987]: INFO : Ignition 2.19.0
Jan 17 12:17:51.815924 ignition[987]: INFO : Stage: umount
Jan 17 12:17:51.815924 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:17:51.815924 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:17:51.819686 ignition[987]: INFO : umount: umount passed
Jan 17 12:17:51.819686 ignition[987]: INFO : Ignition finished successfully
Jan 17 12:17:51.821041 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:17:51.821170 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:17:51.823048 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:17:51.823155 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:17:51.849913 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:17:51.850021 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:17:51.867978 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 12:17:51.868089 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 12:17:51.869121 systemd[1]: Stopped target network.target - Network.
Jan 17 12:17:51.870570 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:17:51.870725 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:17:51.872166 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:17:51.875375 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:17:51.878718 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:17:51.879963 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:17:51.912062 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:17:51.912932 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:17:51.913005 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:17:51.914204 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:17:51.914269 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:17:51.916325 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:17:51.916431 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:17:51.917630 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:17:51.917711 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:17:51.919475 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:17:51.920875 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:17:51.923488 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:17:51.923675 systemd-networkd[748]: eth0: DHCPv6 lease lost
Jan 17 12:17:51.927683 systemd-networkd[748]: eth1: DHCPv6 lease lost
Jan 17 12:17:51.929836 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:17:51.929962 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:17:51.932187 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:17:51.932270 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:17:51.939813 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:17:51.940396 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:17:51.940480 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:17:51.943698 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:17:51.945448 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:17:51.946708 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:17:51.960913 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:17:51.961070 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:17:51.966302 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:17:51.966562 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:17:51.968053 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:17:51.968116 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:17:51.971120 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:17:51.971326 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:17:51.973135 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:17:51.973322 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:17:51.974150 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:17:51.974249 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:17:51.977908 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:17:51.977992 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:17:51.979519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:17:51.979578 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:17:51.980581 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:17:51.980725 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:17:51.983515 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:17:51.983619 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:17:51.984853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:17:51.984996 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:17:51.986931 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:17:51.986996 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:17:51.997009 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:17:51.998953 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:17:51.999065 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:17:51.999731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:17:51.999866 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:52.005685 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:17:52.005824 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:17:52.007859 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:17:52.024126 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:17:52.036257 systemd[1]: Switching root.
Jan 17 12:17:52.111978 systemd-journald[183]: Journal stopped
Jan 17 12:17:53.377376 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:17:53.377447 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:17:53.377468 kernel: SELinux: policy capability open_perms=1
Jan 17 12:17:53.377483 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:17:53.377500 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:17:53.377525 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:17:53.377543 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:17:53.377562 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:17:53.377749 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:17:53.377767 systemd[1]: Successfully loaded SELinux policy in 43.403ms.
Jan 17 12:17:53.377791 kernel: audit: type=1403 audit(1737116272.301:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:17:53.377808 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.555ms.
Jan 17 12:17:53.377822 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:17:53.377834 systemd[1]: Detected virtualization kvm.
Jan 17 12:17:53.377846 systemd[1]: Detected architecture x86-64.
Jan 17 12:17:53.377857 systemd[1]: Detected first boot.
Jan 17 12:17:53.377869 systemd[1]: Hostname set to <ci-4081.3.0-f-fd30d73867>.
Jan 17 12:17:53.377880 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:17:53.377892 zram_generator::config[1030]: No configuration found.
Jan 17 12:17:53.377913 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:17:53.381674 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 12:17:53.381712 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 12:17:53.381728 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:17:53.381744 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:17:53.381765 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:17:53.381777 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:17:53.381788 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:17:53.381804 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:17:53.381816 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:17:53.381832 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:17:53.381859 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:17:53.381871 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:17:53.381883 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:17:53.381920 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:17:53.381932 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:17:53.381944 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:17:53.381960 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:17:53.381971 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:17:53.381983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:17:53.381994 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 12:17:53.382007 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 12:17:53.382019 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:17:53.382033 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:17:53.382045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:17:53.382057 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:17:53.382068 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:17:53.382080 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:17:53.382091 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:17:53.382104 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:17:53.382115 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:17:53.382127 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:17:53.382155 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:17:53.382167 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:17:53.382179 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:17:53.382198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:17:53.382209 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:17:53.382220 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:53.382233 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:17:53.382245 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:17:53.382257 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:17:53.382273 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:17:53.382285 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:17:53.382296 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:17:53.382313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:53.382343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:17:53.382363 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:17:53.382380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:17:53.382399 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:17:53.382422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:17:53.382440 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:17:53.382452 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:17:53.382464 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:17:53.382476 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 12:17:53.382487 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 12:17:53.382498 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 12:17:53.382520 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 12:17:53.382532 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:17:53.382547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:17:53.382558 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:17:53.382570 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:17:53.382669 systemd-journald[1110]: Collecting audit messages is disabled.
Jan 17 12:17:53.382708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:17:53.382724 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 12:17:53.382735 systemd[1]: Stopped verity-setup.service.
Jan 17 12:17:53.382748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:53.382764 systemd-journald[1110]: Journal started
Jan 17 12:17:53.382788 systemd-journald[1110]: Runtime Journal (/run/log/journal/bb482d9a3cd74ceb8fb3ef30998bd54e) is 4.9M, max 39.3M, 34.4M free.
Jan 17 12:17:53.065105 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:17:53.088360 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 12:17:53.088908 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 12:17:53.385719 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:17:53.389846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:17:53.391852 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:17:53.392516 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:17:53.393829 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:17:53.395197 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:17:53.395955 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:17:53.421625 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:17:53.423105 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:17:53.423590 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:17:53.426043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:17:53.426242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:17:53.427192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:17:53.427343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:17:53.428657 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:17:53.437005 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:17:53.438628 kernel: ACPI: bus type drm_connector registered
Jan 17 12:17:53.444426 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:17:53.445807 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:17:53.446825 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:17:53.456651 kernel: loop: module loaded
Jan 17 12:17:53.461102 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:17:53.461300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:17:53.463099 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:17:53.472849 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:17:53.473924 kernel: fuse: init (API version 7.39)
Jan 17 12:17:53.473800 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:17:53.473844 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:17:53.477538 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:17:53.483874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:17:53.492955 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:17:53.494901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:53.500836 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:17:53.510928 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:17:53.511861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:17:53.515870 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:17:53.516946 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:17:53.525862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:17:53.533836 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:17:53.539988 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:17:53.540931 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:17:53.541705 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:17:53.542556 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:17:53.544672 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:17:53.568761 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:17:53.583874 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:17:53.589231 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:17:53.628744 systemd-journald[1110]: Time spent on flushing to /var/log/journal/bb482d9a3cd74ceb8fb3ef30998bd54e is 121.763ms for 985 entries.
Jan 17 12:17:53.628744 systemd-journald[1110]: System Journal (/var/log/journal/bb482d9a3cd74ceb8fb3ef30998bd54e) is 8.0M, max 195.6M, 187.6M free.
Jan 17 12:17:53.804841 systemd-journald[1110]: Received client request to flush runtime journal.
Jan 17 12:17:53.804907 kernel: loop0: detected capacity change from 0 to 205544
Jan 17 12:17:53.804925 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:17:53.804939 kernel: loop1: detected capacity change from 0 to 142488
Jan 17 12:17:53.675610 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:17:53.680804 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:17:53.690867 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:17:53.701346 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:17:53.739811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:17:53.753915 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:17:53.763835 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:17:53.766268 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:17:53.808695 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 12:17:53.816278 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:17:53.827890 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:17:53.847574 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 12:17:53.858958 kernel: loop2: detected capacity change from 0 to 140768
Jan 17 12:17:53.930640 kernel: loop3: detected capacity change from 0 to 8
Jan 17 12:17:53.947038 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jan 17 12:17:53.947060 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jan 17 12:17:53.977127 kernel: loop4: detected capacity change from 0 to 205544
Jan 17 12:17:53.974940 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:17:54.007000 kernel: loop5: detected capacity change from 0 to 142488
Jan 17 12:17:54.045712 kernel: loop6: detected capacity change from 0 to 140768
Jan 17 12:17:54.078637 kernel: loop7: detected capacity change from 0 to 8
Jan 17 12:17:54.088127 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 17 12:17:54.089139 (sd-merge)[1174]: Merged extensions into '/usr'.
Jan 17 12:17:54.115837 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:17:54.115863 systemd[1]: Reloading...
Jan 17 12:17:54.368674 zram_generator::config[1202]: No configuration found.
Jan 17 12:17:54.629427 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:17:54.691667 ldconfig[1142]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 12:17:54.734028 systemd[1]: Reloading finished in 617 ms.
Jan 17 12:17:54.768239 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 12:17:54.770286 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:17:54.787270 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:17:54.802027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:17:54.842274 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:17:54.842497 systemd[1]: Reloading...
Jan 17 12:17:54.911662 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:17:54.912282 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:17:54.916201 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:17:54.916859 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 17 12:17:54.916963 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 17 12:17:54.922112 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:17:54.922128 systemd-tmpfiles[1246]: Skipping /boot
Jan 17 12:17:54.943946 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:17:54.943958 systemd-tmpfiles[1246]: Skipping /boot
Jan 17 12:17:55.006503 zram_generator::config[1270]: No configuration found.
Jan 17 12:17:55.348167 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:17:55.424263 systemd[1]: Reloading finished in 580 ms.
Jan 17 12:17:55.447506 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 12:17:55.465246 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:17:55.482027 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:17:55.503255 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:17:55.508920 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:17:55.524109 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:17:55.534006 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:17:55.547007 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:17:55.561693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:55.561986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:55.576100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:17:55.581804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:17:55.596123 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:17:55.597329 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:55.599151 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:55.616077 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 12:17:55.621464 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:55.622463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:55.622760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:55.622906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:55.630335 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:55.630862 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:55.642289 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:17:55.644311 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:55.644502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:55.651424 systemd[1]: Finished ensure-sysext.service.
Jan 17 12:17:55.659852 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Jan 17 12:17:55.667971 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 12:17:55.697323 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 12:17:55.725719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:17:55.727727 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:17:55.729949 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:17:55.745095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:17:55.745551 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:17:55.751649 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:17:55.752149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:17:55.753552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:17:55.763309 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:17:55.764753 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:17:55.802975 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 12:17:55.817396 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 12:17:55.818507 augenrules[1353]: No rules
Jan 17 12:17:55.829693 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 12:17:55.833707 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 12:17:55.836458 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:17:55.839265 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:17:55.845093 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:17:55.866031 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:17:55.896981 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 12:17:56.099761 systemd-resolved[1323]: Positive Trust Anchors:
Jan 17 12:17:56.099783 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:17:56.099831 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:17:56.110159 systemd-resolved[1323]: Using system hostname 'ci-4081.3.0-f-fd30d73867'.
Jan 17 12:17:56.113279 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:17:56.114547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:17:56.150483 systemd-networkd[1366]: lo: Link UP
Jan 17 12:17:56.151250 systemd-networkd[1366]: lo: Gained carrier
Jan 17 12:17:56.153866 systemd-networkd[1366]: Enumeration completed
Jan 17 12:17:56.154649 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:17:56.156918 systemd[1]: Reached target network.target - Network.
Jan 17 12:17:56.166958 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 12:17:56.168348 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 12:17:56.170073 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 12:17:56.297211 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 12:17:56.332807 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 17 12:17:56.333797 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:56.334028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:56.344011 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:17:56.372653 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1378)
Jan 17 12:17:56.370955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:17:56.376209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:17:56.377821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:56.377887 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:17:56.377909 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:56.382579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:17:56.383737 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:17:56.399214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:17:56.399495 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:17:56.400818 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:17:56.409683 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 17 12:17:56.413913 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 17 12:17:56.428267 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:17:56.431684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:17:56.450687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:17:56.457812 systemd-networkd[1366]: eth0: Configuring with /run/systemd/network/10-02:9c:ab:79:3a:eb.network.
Jan 17 12:17:56.460930 systemd-networkd[1366]: eth0: Link UP
Jan 17 12:17:56.461471 systemd-networkd[1366]: eth0: Gained carrier
Jan 17 12:17:56.469763 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection.
Jan 17 12:17:56.520661 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 17 12:17:56.539630 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 12:17:56.553639 kernel: ACPI: button: Power Button [PWRF]
Jan 17 12:17:56.576831 systemd-networkd[1366]: eth1: Configuring with /run/systemd/network/10-16:62:6b:82:d3:e3.network.
Jan 17 12:17:56.578280 systemd-networkd[1366]: eth1: Link UP
Jan 17 12:17:56.578293 systemd-networkd[1366]: eth1: Gained carrier
Jan 17 12:17:56.588637 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 12:17:56.653644 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 17 12:17:56.653747 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 17 12:17:56.662643 kernel: Console: switching to colour dummy device 80x25
Jan 17 12:17:56.662745 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 17 12:17:56.662820 kernel: [drm] features: -context_init
Jan 17 12:17:56.662845 kernel: [drm] number of scanouts: 1
Jan 17 12:17:56.662506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:17:56.664897 kernel: [drm] number of cap sets: 0
Jan 17 12:17:56.664943 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 17 12:17:56.667836 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 12:17:56.672903 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 12:17:56.681683 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 17 12:17:56.681784 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 12:17:56.692666 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 17 12:17:57.102255 systemd-timesyncd[1339]: Contacted time server 64.79.100.196:123 (0.flatcar.pool.ntp.org).
Jan 17 12:17:57.102336 systemd-timesyncd[1339]: Initial clock synchronization to Fri 2025-01-17 12:17:57.102090 UTC.
Jan 17 12:17:57.102416 systemd-resolved[1323]: Clock change detected. Flushing caches.
Jan 17 12:17:57.110153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:57.125964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:17:57.126304 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:57.142203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:57.142803 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 12:17:57.169372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:17:57.169731 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:57.195384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:57.364090 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:57.408569 kernel: EDAC MC: Ver: 3.0.0
Jan 17 12:17:57.445283 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 12:17:57.454182 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 12:17:57.499835 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:17:57.540115 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 12:17:57.542856 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:17:57.543042 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:17:57.543393 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 12:17:57.543549 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 12:17:57.544292 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 12:17:57.548986 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 12:17:57.549281 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 12:17:57.549393 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 12:17:57.549497 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:17:57.549651 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:17:57.554031 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 12:17:57.562503 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 12:17:57.573335 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 12:17:57.579012 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 12:17:57.580755 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 12:17:57.583670 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:17:57.588989 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:17:57.589994 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:17:57.590045 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:17:57.599061 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 12:17:57.602027 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:17:57.613332 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 12:17:57.620146 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 12:17:57.631119 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 12:17:57.642067 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 12:17:57.644662 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 12:17:57.652073 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 12:17:57.664061 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 12:17:57.676174 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 12:17:57.683459 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 12:17:57.695462 jq[1435]: false
Jan 17 12:17:57.706285 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 12:17:57.710454 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 12:17:57.714411 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 12:17:57.726703 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 12:17:57.754007 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 12:17:57.765801 coreos-metadata[1433]: Jan 17 12:17:57.756 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:17:57.763555 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 12:17:57.770103 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 12:17:57.770550 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 12:17:57.782573 coreos-metadata[1433]: Jan 17 12:17:57.782 INFO Fetch successful
Jan 17 12:17:57.832925 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 12:17:57.833292 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 12:17:57.835702 jq[1446]: true
Jan 17 12:17:57.879549 update_engine[1444]: I20250117 12:17:57.879403 1444 main.cc:92] Flatcar Update Engine starting
Jan 17 12:17:57.893467 extend-filesystems[1438]: Found loop4
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found loop5
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found loop6
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found loop7
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found vda
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found vda1
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found vda2
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found vda3
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found usr
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found vda4
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found vda6
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found vda7
Jan 17 12:17:57.903883 extend-filesystems[1438]: Found vda9
Jan 17 12:17:57.903883 extend-filesystems[1438]: Checking size of /dev/vda9
Jan 17 12:17:57.940268 tar[1450]: linux-amd64/helm
Jan 17 12:17:57.913221 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 12:17:57.912917 dbus-daemon[1434]: [system] SELinux support is enabled
Jan 17 12:17:57.942151 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 12:17:57.942210 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 12:17:57.949389 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 12:17:57.949539 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 17 12:17:57.949569 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 12:17:57.958123 jq[1458]: true
Jan 17 12:17:57.979313 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 12:17:57.987434 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 12:17:57.987696 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 12:17:57.991102 update_engine[1444]: I20250117 12:17:57.990204 1444 update_check_scheduler.cc:74] Next update check in 6m10s Jan 17 12:17:58.001520 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:17:58.023657 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:17:58.027494 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:17:58.029518 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:17:58.052321 extend-filesystems[1438]: Resized partition /dev/vda9 Jan 17 12:17:58.069057 systemd-networkd[1366]: eth1: Gained IPv6LL Jan 17 12:17:58.086790 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:17:58.114462 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:17:58.128045 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 17 12:17:58.127296 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:17:58.142081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:58.164232 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:17:58.220566 systemd-logind[1443]: New seat seat0. Jan 17 12:17:58.224127 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:17:58.224154 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:17:58.230578 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:17:58.309891 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1382) Jan 17 12:17:58.418761 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:17:58.506714 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:17:58.510970 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:17:58.538298 systemd[1]: Starting sshkeys.service... Jan 17 12:17:58.590612 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:17:58.595686 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:17:58.616134 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:17:58.645624 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 12:17:58.663637 systemd-networkd[1366]: eth0: Gained IPv6LL Jan 17 12:17:58.702511 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:17:58.702511 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 12:17:58.702511 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 12:17:58.722614 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Jan 17 12:17:58.722614 extend-filesystems[1438]: Found vdb Jan 17 12:17:58.705847 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:17:58.735173 coreos-metadata[1515]: Jan 17 12:17:58.734 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:17:58.706236 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
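The extend-filesystems trail above records an online grow of /dev/vda9 from 553472 to 15121403 4k blocks. A hedged sketch of the equivalent manual steps; growpart (from cloud-utils) is an assumption for the partition step, not necessarily what the service itself ran:

# Grow partition 9 of /dev/vda to fill the disk (assumed tool, see note above).
growpart /dev/vda 9
# With no size argument resize2fs grows ext4 to the partition size; ext4
# supports this online while mounted at /, matching the resize logged above.
resize2fs /dev/vda9
# Confirm the new size against the journal's figure (15121403 blocks).
dumpe2fs -h /dev/vda9 | grep 'Block count'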
Jan 17 12:17:58.758919 coreos-metadata[1515]: Jan 17 12:17:58.756 INFO Fetch successful Jan 17 12:17:58.780935 unknown[1515]: wrote ssh authorized keys file for user: core Jan 17 12:17:58.909007 update-ssh-keys[1524]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:17:58.914243 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:17:58.924375 systemd[1]: Finished sshkeys.service. Jan 17 12:17:58.995287 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:17:59.108247 containerd[1459]: time="2025-01-17T12:17:59.104064084Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:17:59.164811 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:17:59.179143 containerd[1459]: time="2025-01-17T12:17:59.174099632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:59.181236 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.181489090Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.181547218Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.181577384Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.181870905Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.181897578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.181992056Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.182013878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.182281061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.182309302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.182333352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:59.187179 containerd[1459]: time="2025-01-17T12:17:59.182353117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:17:59.188986 containerd[1459]: time="2025-01-17T12:17:59.182483010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:59.188986 containerd[1459]: time="2025-01-17T12:17:59.182888487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:17:59.188986 containerd[1459]: time="2025-01-17T12:17:59.186122049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:17:59.188986 containerd[1459]: time="2025-01-17T12:17:59.186173563Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:17:59.188986 containerd[1459]: time="2025-01-17T12:17:59.186393408Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:17:59.188986 containerd[1459]: time="2025-01-17T12:17:59.186465070Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:17:59.207368 containerd[1459]: time="2025-01-17T12:17:59.206098045Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:17:59.207368 containerd[1459]: time="2025-01-17T12:17:59.206203518Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:17:59.207368 containerd[1459]: time="2025-01-17T12:17:59.206230919Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:17:59.207368 containerd[1459]: time="2025-01-17T12:17:59.206335332Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:17:59.207368 containerd[1459]: time="2025-01-17T12:17:59.206363329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:17:59.207368 containerd[1459]: time="2025-01-17T12:17:59.206596543Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:17:59.207368 containerd[1459]: time="2025-01-17T12:17:59.207124780Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:17:59.207368 containerd[1459]: time="2025-01-17T12:17:59.207291004Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:17:59.207368 containerd[1459]: time="2025-01-17T12:17:59.207372854Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207397091Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207424767Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207452458Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207477283Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207507314Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207532675Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207555667Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207578045Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207599602Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207654862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207684638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207706850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207729827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.207948 containerd[1459]: time="2025-01-17T12:17:59.207804144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.207828567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.207851083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.207873206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.207898303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.207923336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.207945519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.207967424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.207993189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.208061668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.208097553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.208115907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.209209 containerd[1459]: time="2025-01-17T12:17:59.208487961Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:17:59.212041 containerd[1459]: time="2025-01-17T12:17:59.211929169Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:17:59.212041 containerd[1459]: time="2025-01-17T12:17:59.212006997Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:17:59.212041 containerd[1459]: time="2025-01-17T12:17:59.212029417Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:17:59.212189 containerd[1459]: time="2025-01-17T12:17:59.212051245Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:17:59.212189 containerd[1459]: time="2025-01-17T12:17:59.212069604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:17:59.212189 containerd[1459]: time="2025-01-17T12:17:59.212091844Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:17:59.212189 containerd[1459]: time="2025-01-17T12:17:59.212115527Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:17:59.212189 containerd[1459]: time="2025-01-17T12:17:59.212133726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:17:59.213108 containerd[1459]: time="2025-01-17T12:17:59.212575201Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:17:59.213108 containerd[1459]: time="2025-01-17T12:17:59.212696597Z" level=info msg="Connect containerd service" Jan 17 12:17:59.213108 containerd[1459]: time="2025-01-17T12:17:59.212789138Z" level=info msg="using legacy CRI server" Jan 17 12:17:59.213108 containerd[1459]: time="2025-01-17T12:17:59.212804526Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:17:59.213108 containerd[1459]: time="2025-01-17T12:17:59.213015160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:17:59.217843 containerd[1459]: time="2025-01-17T12:17:59.217553015Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:17:59.217843 
containerd[1459]: time="2025-01-17T12:17:59.217755725Z" level=info msg="Start subscribing containerd event" Jan 17 12:17:59.217843 containerd[1459]: time="2025-01-17T12:17:59.217839746Z" level=info msg="Start recovering state" Jan 17 12:17:59.218046 containerd[1459]: time="2025-01-17T12:17:59.217938646Z" level=info msg="Start event monitor" Jan 17 12:17:59.218046 containerd[1459]: time="2025-01-17T12:17:59.217954522Z" level=info msg="Start snapshots syncer" Jan 17 12:17:59.218046 containerd[1459]: time="2025-01-17T12:17:59.217968338Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:17:59.218046 containerd[1459]: time="2025-01-17T12:17:59.217981765Z" level=info msg="Start streaming server" Jan 17 12:17:59.223386 containerd[1459]: time="2025-01-17T12:17:59.221108837Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:17:59.223386 containerd[1459]: time="2025-01-17T12:17:59.221233838Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:17:59.223386 containerd[1459]: time="2025-01-17T12:17:59.221325714Z" level=info msg="containerd successfully booted in 0.118996s" Jan 17 12:17:59.221643 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:17:59.239977 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:17:59.240268 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:17:59.263094 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:17:59.320714 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:17:59.340526 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:17:59.354721 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:17:59.370089 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:17:59.759256 tar[1450]: linux-amd64/LICENSE Jan 17 12:17:59.759256 tar[1450]: linux-amd64/README.md Jan 17 12:17:59.781960 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:18:00.513207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:00.526471 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:18:00.527487 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:18:00.531864 systemd[1]: Startup finished in 1.351s (kernel) + 5.566s (initrd) + 7.870s (userspace) = 14.789s. Jan 17 12:18:01.778686 kubelet[1557]: E0117 12:18:01.778543 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:18:01.783804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:18:01.784037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:18:01.787858 systemd[1]: kubelet.service: Consumed 1.755s CPU time. Jan 17 12:18:07.009231 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:18:07.024345 systemd[1]: Started sshd@0-209.38.138.250:22-139.178.68.195:45426.service - OpenSSH per-connection server daemon (139.178.68.195:45426). 
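Rendered out of the single-line CRI dump above, the operative containerd settings correspond roughly to the config.toml fragment below. This is a hedged reconstruction from the logged values (SystemdCgroup=true for runc, the pause:3.8 sandbox image, the CNI directories), not a copy of the host's actual file; the "no network config found in /etc/cni/net.d" error clears once a CNI plugin drops a conflist there.

# Hedged reconstruction; keys follow containerd 1.7's CRI plugin schema.
cat <<'EOF' >/etc/containerd/config.toml
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir  = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
EOF
systemctl restart containerd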
Jan 17 12:18:07.103208 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 45426 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:07.105914 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:07.119444 systemd-logind[1443]: New session 1 of user core. Jan 17 12:18:07.121157 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:18:07.134566 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:18:07.152266 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:18:07.160316 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:18:07.175109 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:18:07.303464 systemd[1573]: Queued start job for default target default.target. Jan 17 12:18:07.314660 systemd[1573]: Created slice app.slice - User Application Slice. Jan 17 12:18:07.314719 systemd[1573]: Reached target paths.target - Paths. Jan 17 12:18:07.314768 systemd[1573]: Reached target timers.target - Timers. Jan 17 12:18:07.317161 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:18:07.334151 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:18:07.334314 systemd[1573]: Reached target sockets.target - Sockets. Jan 17 12:18:07.334330 systemd[1573]: Reached target basic.target - Basic System. Jan 17 12:18:07.334401 systemd[1573]: Reached target default.target - Main User Target. Jan 17 12:18:07.334439 systemd[1573]: Startup finished in 146ms. Jan 17 12:18:07.335101 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:18:07.345161 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:18:07.422149 systemd[1]: Started sshd@1-209.38.138.250:22-139.178.68.195:45434.service - OpenSSH per-connection server daemon (139.178.68.195:45434). Jan 17 12:18:07.468262 sshd[1584]: Accepted publickey for core from 139.178.68.195 port 45434 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:07.471293 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:07.477379 systemd-logind[1443]: New session 2 of user core. Jan 17 12:18:07.486132 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:18:07.552030 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:07.570106 systemd[1]: sshd@1-209.38.138.250:22-139.178.68.195:45434.service: Deactivated successfully. Jan 17 12:18:07.572546 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:18:07.574704 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:18:07.582259 systemd[1]: Started sshd@2-209.38.138.250:22-139.178.68.195:45442.service - OpenSSH per-connection server daemon (139.178.68.195:45442). Jan 17 12:18:07.584597 systemd-logind[1443]: Removed session 2. Jan 17 12:18:07.629586 sshd[1591]: Accepted publickey for core from 139.178.68.195 port 45442 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:07.632078 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:07.639528 systemd-logind[1443]: New session 3 of user core. Jan 17 12:18:07.651116 systemd[1]: Started session-3.scope - Session 3 of User core. 
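On the session bookkeeping above: the first SSH login pulls in user-runtime-dir@500.service and user@500.service, a per-user systemd instance that then owns the session scopes. Two standard checks, assuming only the UID 500 shown in the log:

# List active logind sessions and the per-user manager started above.
loginctl list-sessions
systemctl status user@500.service --no-pager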
Jan 17 12:18:07.713308 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:07.725912 systemd[1]: sshd@2-209.38.138.250:22-139.178.68.195:45442.service: Deactivated successfully. Jan 17 12:18:07.728067 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:18:07.731063 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:18:07.736348 systemd[1]: Started sshd@3-209.38.138.250:22-139.178.68.195:45454.service - OpenSSH per-connection server daemon (139.178.68.195:45454). Jan 17 12:18:07.738232 systemd-logind[1443]: Removed session 3. Jan 17 12:18:07.787021 sshd[1598]: Accepted publickey for core from 139.178.68.195 port 45454 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:07.789265 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:07.797794 systemd-logind[1443]: New session 4 of user core. Jan 17 12:18:07.804095 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:18:07.869685 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:07.884114 systemd[1]: sshd@3-209.38.138.250:22-139.178.68.195:45454.service: Deactivated successfully. Jan 17 12:18:07.886375 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:18:07.889008 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:18:07.895257 systemd[1]: Started sshd@4-209.38.138.250:22-139.178.68.195:45460.service - OpenSSH per-connection server daemon (139.178.68.195:45460). Jan 17 12:18:07.898066 systemd-logind[1443]: Removed session 4. Jan 17 12:18:07.941185 sshd[1605]: Accepted publickey for core from 139.178.68.195 port 45460 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:07.943405 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:07.951981 systemd-logind[1443]: New session 5 of user core. Jan 17 12:18:07.960148 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:18:08.037482 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:18:08.037943 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:08.053310 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:08.058366 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:08.071590 systemd[1]: sshd@4-209.38.138.250:22-139.178.68.195:45460.service: Deactivated successfully. Jan 17 12:18:08.073862 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:18:08.074903 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:18:08.082354 systemd[1]: Started sshd@5-209.38.138.250:22-139.178.68.195:45468.service - OpenSSH per-connection server daemon (139.178.68.195:45468). Jan 17 12:18:08.084994 systemd-logind[1443]: Removed session 5. Jan 17 12:18:08.142284 sshd[1613]: Accepted publickey for core from 139.178.68.195 port 45468 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:08.145178 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:08.151077 systemd-logind[1443]: New session 6 of user core. Jan 17 12:18:08.162089 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 12:18:08.225505 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:18:08.226303 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:08.232033 sudo[1617]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:08.241186 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:18:08.242087 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:08.266692 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:18:08.268836 auditctl[1620]: No rules Jan 17 12:18:08.269443 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:18:08.269761 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:18:08.283681 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:18:08.318814 augenrules[1638]: No rules Jan 17 12:18:08.321170 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:18:08.322988 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:08.326802 sshd[1613]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:08.334708 systemd[1]: sshd@5-209.38.138.250:22-139.178.68.195:45468.service: Deactivated successfully. Jan 17 12:18:08.337713 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:18:08.339599 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:18:08.344179 systemd[1]: Started sshd@6-209.38.138.250:22-139.178.68.195:45484.service - OpenSSH per-connection server daemon (139.178.68.195:45484). Jan 17 12:18:08.347345 systemd-logind[1443]: Removed session 6. Jan 17 12:18:08.404860 sshd[1646]: Accepted publickey for core from 139.178.68.195 port 45484 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:18:08.406096 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:08.412066 systemd-logind[1443]: New session 7 of user core. Jan 17 12:18:08.423121 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:18:08.487516 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:18:08.487905 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:09.105660 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:18:09.106350 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:18:09.671921 dockerd[1664]: time="2025-01-17T12:18:09.671840267Z" level=info msg="Starting up" Jan 17 12:18:09.838112 dockerd[1664]: time="2025-01-17T12:18:09.838038321Z" level=info msg="Loading containers: start." Jan 17 12:18:09.980802 kernel: Initializing XFRM netlink socket Jan 17 12:18:10.107370 systemd-networkd[1366]: docker0: Link UP Jan 17 12:18:10.133680 dockerd[1664]: time="2025-01-17T12:18:10.133631418Z" level=info msg="Loading containers: done." Jan 17 12:18:10.163775 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1297974285-merged.mount: Deactivated successfully. 
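The sudo entries above log their COMMAND= fields verbatim; replayed as a shell session they amount to the following. audit-rules.service rebuilds the rule set from /etc/audit/rules.d via augenrules, which is why both auditctl and augenrules report "No rules" once the files are gone:

# Reconstructed from the COMMAND= fields in the journal above.
sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
sudo systemctl restart audit-rules
# List the (now empty) loaded rule set.
sudo auditctl -l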
Jan 17 12:18:10.164416 dockerd[1664]: time="2025-01-17T12:18:10.163734310Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:18:10.164723 dockerd[1664]: time="2025-01-17T12:18:10.164685310Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:18:10.165430 dockerd[1664]: time="2025-01-17T12:18:10.165010007Z" level=info msg="Daemon has completed initialization" Jan 17 12:18:10.222035 dockerd[1664]: time="2025-01-17T12:18:10.221931481Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:18:10.222575 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:18:11.199767 containerd[1459]: time="2025-01-17T12:18:11.199196361Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 17 12:18:11.838996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:18:11.847773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:11.880540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267884289.mount: Deactivated successfully. Jan 17 12:18:12.039034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:12.049771 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:18:12.159055 kubelet[1823]: E0117 12:18:12.158873 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:18:12.167087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:18:12.167294 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
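The kubelet exit above, and the scheduled restarts that follow, are the expected bootstrap loop: /var/lib/kubelet/config.yaml does not exist until the node is provisioned, and with KUBELET_KUBEADM_ARGS referenced by the unit that file would normally be written by kubeadm during init or join. A hedged check:

# The restart loop persists until the config file appears.
test -f /var/lib/kubelet/config.yaml || echo 'kubelet not provisioned yet'
systemctl status kubelet --no-pager | head -n 5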
Jan 17 12:18:13.557045 containerd[1459]: time="2025-01-17T12:18:13.556947363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:13.558806 containerd[1459]: time="2025-01-17T12:18:13.558691254Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 17 12:18:13.559801 containerd[1459]: time="2025-01-17T12:18:13.559553674Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:13.564855 containerd[1459]: time="2025-01-17T12:18:13.564761095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:13.568039 containerd[1459]: time="2025-01-17T12:18:13.567941373Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.368665922s" Jan 17 12:18:13.568039 containerd[1459]: time="2025-01-17T12:18:13.568028723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 17 12:18:13.570689 containerd[1459]: time="2025-01-17T12:18:13.570468071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 17 12:18:15.383884 containerd[1459]: time="2025-01-17T12:18:15.383808171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:15.385533 containerd[1459]: time="2025-01-17T12:18:15.385446600Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 17 12:18:15.386324 containerd[1459]: time="2025-01-17T12:18:15.386240362Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:15.392774 containerd[1459]: time="2025-01-17T12:18:15.391125221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:15.392960 containerd[1459]: time="2025-01-17T12:18:15.392844050Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.821959026s" Jan 17 12:18:15.392960 containerd[1459]: time="2025-01-17T12:18:15.392899030Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 17 12:18:15.394453 
containerd[1459]: time="2025-01-17T12:18:15.394415843Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 17 12:18:17.132980 containerd[1459]: time="2025-01-17T12:18:17.132880729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:17.135990 containerd[1459]: time="2025-01-17T12:18:17.135913079Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 17 12:18:17.138799 containerd[1459]: time="2025-01-17T12:18:17.137444505Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:17.140901 containerd[1459]: time="2025-01-17T12:18:17.140848105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:17.142655 containerd[1459]: time="2025-01-17T12:18:17.142604951Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.748022913s" Jan 17 12:18:17.142774 containerd[1459]: time="2025-01-17T12:18:17.142666035Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 17 12:18:17.143659 containerd[1459]: time="2025-01-17T12:18:17.143630196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 17 12:18:17.339981 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 17 12:18:18.416031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837754219.mount: Deactivated successfully. 
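The PullImage lines above go through containerd's CRI service on the socket logged earlier (/run/containerd/containerd.sock). The same pulls can be reproduced or pre-seeded by hand; a hedged sketch assuming crictl is installed, which this log does not show:

# Pull via CRI, as the kubelet does; name and tag taken from the journal.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
  pull registry.k8s.io/kube-scheduler:v1.31.5
# Verify in containerd's k8s.io namespace.
ctr -n k8s.io images ls | grep kube-scheduler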
Jan 17 12:18:19.285428 containerd[1459]: time="2025-01-17T12:18:19.285370205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:19.286882 containerd[1459]: time="2025-01-17T12:18:19.286817992Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 17 12:18:19.287788 containerd[1459]: time="2025-01-17T12:18:19.287664889Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:19.290722 containerd[1459]: time="2025-01-17T12:18:19.290672177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:19.292278 containerd[1459]: time="2025-01-17T12:18:19.291586107Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.147918699s" Jan 17 12:18:19.292278 containerd[1459]: time="2025-01-17T12:18:19.291637641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 17 12:18:19.292626 containerd[1459]: time="2025-01-17T12:18:19.292590184Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:18:19.880142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296370437.mount: Deactivated successfully. Jan 17 12:18:20.403067 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
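On the "degraded feature set UDP instead of UDP+EDNS0" lines above and earlier: systemd-resolved probes each upstream server and steps its feature level down when EDNS0 responses misbehave; resolution keeps working, just without EDNS0's larger payloads. Two standard checks:

# Per-link DNS servers, and a query routed through resolved.
resolvectl status
resolvectl query registry.k8s.io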
Jan 17 12:18:21.718059 containerd[1459]: time="2025-01-17T12:18:21.716978865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:21.720521 containerd[1459]: time="2025-01-17T12:18:21.720169748Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:18:21.724778 containerd[1459]: time="2025-01-17T12:18:21.724272103Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:21.734042 containerd[1459]: time="2025-01-17T12:18:21.733839780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:21.738450 containerd[1459]: time="2025-01-17T12:18:21.738297602Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.445652604s" Jan 17 12:18:21.738450 containerd[1459]: time="2025-01-17T12:18:21.738389707Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:18:21.739581 containerd[1459]: time="2025-01-17T12:18:21.739172792Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 12:18:22.309858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:18:22.323061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:22.350463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796001514.mount: Deactivated successfully. 
Jan 17 12:18:22.362422 containerd[1459]: time="2025-01-17T12:18:22.362349265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:22.365052 containerd[1459]: time="2025-01-17T12:18:22.364960066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 12:18:22.366955 containerd[1459]: time="2025-01-17T12:18:22.366905945Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:22.397584 containerd[1459]: time="2025-01-17T12:18:22.397511341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:22.399572 containerd[1459]: time="2025-01-17T12:18:22.398657904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 659.427154ms" Jan 17 12:18:22.402055 containerd[1459]: time="2025-01-17T12:18:22.401966244Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 12:18:22.403084 containerd[1459]: time="2025-01-17T12:18:22.403030619Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 17 12:18:22.547374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:22.567731 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:18:22.677662 kubelet[1950]: E0117 12:18:22.677599 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:18:22.682765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:18:22.683064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:18:23.154677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4124505443.mount: Deactivated successfully. 
Jan 17 12:18:26.424798 containerd[1459]: time="2025-01-17T12:18:26.424107345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:26.426803 containerd[1459]: time="2025-01-17T12:18:26.426697774Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 17 12:18:26.429258 containerd[1459]: time="2025-01-17T12:18:26.429140739Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:26.434163 containerd[1459]: time="2025-01-17T12:18:26.434075801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:26.437701 containerd[1459]: time="2025-01-17T12:18:26.437192911Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.034107668s" Jan 17 12:18:26.437701 containerd[1459]: time="2025-01-17T12:18:26.437263652Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 17 12:18:29.586564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:29.600434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:29.653115 systemd[1]: Reloading requested from client PID 2039 ('systemctl') (unit session-7.scope)... Jan 17 12:18:29.653143 systemd[1]: Reloading... Jan 17 12:18:29.798865 zram_generator::config[2075]: No configuration found. Jan 17 12:18:29.971810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:30.089190 systemd[1]: Reloading finished in 435 ms. Jan 17 12:18:30.154490 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:18:30.154939 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:18:30.155478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:30.169632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:30.330033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:30.342573 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:18:30.418270 kubelet[2131]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:30.418270 kubelet[2131]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
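On the docker.socket warning above: systemd already rewrote the legacy /var/run path to /run at load time and asks for the unit to be updated. A drop-in would make that permanent; a hedged sketch, not Flatcar's shipped fix:

mkdir -p /etc/systemd/system/docker.socket.d
cat <<'EOF' >/etc/systemd/system/docker.socket.d/override.conf
[Socket]
# An empty assignment clears the inherited list before re-adding the path.
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload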
Jan 17 12:18:30.418270 kubelet[2131]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:30.420143 kubelet[2131]: I0117 12:18:30.420017 2131 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:18:31.064181 kubelet[2131]: I0117 12:18:31.064114 2131 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:18:31.064181 kubelet[2131]: I0117 12:18:31.064159 2131 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:18:31.064575 kubelet[2131]: I0117 12:18:31.064525 2131 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:18:31.100775 kubelet[2131]: E0117 12:18:31.100683 2131 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://209.38.138.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 209.38.138.250:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:18:31.102773 kubelet[2131]: I0117 12:18:31.102397 2131 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:18:31.112877 kubelet[2131]: E0117 12:18:31.112832 2131 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:18:31.112877 kubelet[2131]: I0117 12:18:31.112872 2131 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:18:31.118428 kubelet[2131]: I0117 12:18:31.118361 2131 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:18:31.120511 kubelet[2131]: I0117 12:18:31.120410 2131 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:18:31.121050 kubelet[2131]: I0117 12:18:31.120981 2131 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:18:31.121380 kubelet[2131]: I0117 12:18:31.121046 2131 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-f-fd30d73867","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:18:31.121380 kubelet[2131]: I0117 12:18:31.121363 2131 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:18:31.121380 kubelet[2131]: I0117 12:18:31.121379 2131 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:18:31.121628 kubelet[2131]: I0117 12:18:31.121597 2131 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:31.124727 kubelet[2131]: I0117 12:18:31.123951 2131 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:18:31.124727 kubelet[2131]: I0117 12:18:31.124007 2131 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:18:31.124727 kubelet[2131]: I0117 12:18:31.124061 2131 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:18:31.124727 kubelet[2131]: I0117 12:18:31.124086 2131 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:18:31.126890 kubelet[2131]: W0117 12:18:31.126816 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.138.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-fd30d73867&limit=500&resourceVersion=0": dial tcp 209.38.138.250:6443: connect: connection refused Jan 17 12:18:31.127175 kubelet[2131]: E0117 12:18:31.127144 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://209.38.138.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-fd30d73867&limit=500&resourceVersion=0\": dial tcp 209.38.138.250:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:18:31.130146 kubelet[2131]: W0117 12:18:31.130081 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.138.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 209.38.138.250:6443: connect: connection refused Jan 17 12:18:31.130146 kubelet[2131]: E0117 12:18:31.130153 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://209.38.138.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.138.250:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:18:31.130947 kubelet[2131]: I0117 12:18:31.130895 2131 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:18:31.133981 kubelet[2131]: I0117 12:18:31.133527 2131 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:18:31.135198 kubelet[2131]: W0117 12:18:31.134370 2131 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:18:31.136986 kubelet[2131]: I0117 12:18:31.136946 2131 server.go:1269] "Started kubelet" Jan 17 12:18:31.138817 kubelet[2131]: I0117 12:18:31.138765 2131 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:18:31.140515 kubelet[2131]: I0117 12:18:31.140482 2131 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:18:31.149189 kubelet[2131]: I0117 12:18:31.146623 2131 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:18:31.149189 kubelet[2131]: I0117 12:18:31.147092 2131 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:18:31.151015 kubelet[2131]: E0117 12:18:31.147613 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.138.250:6443/api/v1/namespaces/default/events\": dial tcp 209.38.138.250:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-f-fd30d73867.181b7a14d2da4cf8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-f-fd30d73867,UID:ci-4081.3.0-f-fd30d73867,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-f-fd30d73867,},FirstTimestamp:2025-01-17 12:18:31.13691468 +0000 UTC m=+0.783556815,LastTimestamp:2025-01-17 12:18:31.13691468 +0000 UTC m=+0.783556815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-f-fd30d73867,}" Jan 17 12:18:31.154694 kubelet[2131]: I0117 12:18:31.154650 2131 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:18:31.159788 kubelet[2131]: I0117 12:18:31.159761 2131 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:18:31.160022 kubelet[2131]: E0117 12:18:31.155769 2131 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:18:31.160542 kubelet[2131]: I0117 12:18:31.160518 2131 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:18:31.160636 kubelet[2131]: I0117 12:18:31.155928 2131 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:18:31.160859 kubelet[2131]: I0117 12:18:31.160848 2131 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:18:31.162054 kubelet[2131]: W0117 12:18:31.161931 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.138.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.138.250:6443: connect: connection refused Jan 17 12:18:31.162054 kubelet[2131]: E0117 12:18:31.162009 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://209.38.138.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 209.38.138.250:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:18:31.163773 kubelet[2131]: E0117 12:18:31.162923 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-f-fd30d73867\" not found" Jan 17 12:18:31.165932 kubelet[2131]: E0117 12:18:31.165891 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.138.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-fd30d73867?timeout=10s\": dial tcp 209.38.138.250:6443: connect: connection refused" interval="200ms" Jan 17 12:18:31.166427 kubelet[2131]: I0117 12:18:31.166406 2131 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:18:31.168915 kubelet[2131]: I0117 12:18:31.168792 2131 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:18:31.169051 kubelet[2131]: I0117 12:18:31.169040 2131 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:18:31.191617 kubelet[2131]: I0117 12:18:31.190764 2131 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:18:31.192460 kubelet[2131]: I0117 12:18:31.192411 2131 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:18:31.192585 kubelet[2131]: I0117 12:18:31.192474 2131 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:18:31.192585 kubelet[2131]: I0117 12:18:31.192538 2131 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:18:31.192659 kubelet[2131]: E0117 12:18:31.192607 2131 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:18:31.208011 kubelet[2131]: W0117 12:18:31.207309 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.138.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.138.250:6443: connect: connection refused Jan 17 12:18:31.208011 kubelet[2131]: E0117 12:18:31.207417 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://209.38.138.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.138.250:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:18:31.211606 kubelet[2131]: I0117 12:18:31.211215 2131 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:18:31.211606 kubelet[2131]: I0117 12:18:31.211239 2131 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:18:31.211606 kubelet[2131]: I0117 12:18:31.211260 2131 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:31.215871 kubelet[2131]: I0117 12:18:31.215321 2131 policy_none.go:49] "None policy: Start" Jan 17 12:18:31.217037 kubelet[2131]: I0117 12:18:31.216609 2131 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:18:31.217037 kubelet[2131]: I0117 12:18:31.216647 2131 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:18:31.224954 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:18:31.243945 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:18:31.249287 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:18:31.263174 kubelet[2131]: E0117 12:18:31.263111 2131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-f-fd30d73867\" not found" Jan 17 12:18:31.268807 kubelet[2131]: I0117 12:18:31.267338 2131 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:18:31.268807 kubelet[2131]: I0117 12:18:31.267548 2131 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:18:31.268807 kubelet[2131]: I0117 12:18:31.267561 2131 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:18:31.268807 kubelet[2131]: I0117 12:18:31.268543 2131 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:18:31.271824 kubelet[2131]: E0117 12:18:31.271601 2131 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-f-fd30d73867\" not found" Jan 17 12:18:31.306042 systemd[1]: Created slice kubepods-burstable-pod7233f41317051ab42e46ae083feed619.slice - libcontainer container kubepods-burstable-pod7233f41317051ab42e46ae083feed619.slice. 
Jan 17 12:18:31.334140 systemd[1]: Created slice kubepods-burstable-pod6a253aeb1da1690fe7214b0d25291172.slice - libcontainer container kubepods-burstable-pod6a253aeb1da1690fe7214b0d25291172.slice. Jan 17 12:18:31.355444 systemd[1]: Created slice kubepods-burstable-podfe14243ae54268194a86772f3c04127d.slice - libcontainer container kubepods-burstable-podfe14243ae54268194a86772f3c04127d.slice. Jan 17 12:18:31.366847 kubelet[2131]: E0117 12:18:31.366726 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.138.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-fd30d73867?timeout=10s\": dial tcp 209.38.138.250:6443: connect: connection refused" interval="400ms" Jan 17 12:18:31.370520 kubelet[2131]: I0117 12:18:31.370441 2131 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.371086 kubelet[2131]: E0117 12:18:31.371024 2131 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://209.38.138.250:6443/api/v1/nodes\": dial tcp 209.38.138.250:6443: connect: connection refused" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.462855 kubelet[2131]: I0117 12:18:31.462769 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7233f41317051ab42e46ae083feed619-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-f-fd30d73867\" (UID: \"7233f41317051ab42e46ae083feed619\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.462855 kubelet[2131]: I0117 12:18:31.462844 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.462855 kubelet[2131]: I0117 12:18:31.462881 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.463508 kubelet[2131]: I0117 12:18:31.462908 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7233f41317051ab42e46ae083feed619-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-f-fd30d73867\" (UID: \"7233f41317051ab42e46ae083feed619\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.463508 kubelet[2131]: I0117 12:18:31.462934 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7233f41317051ab42e46ae083feed619-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-f-fd30d73867\" (UID: \"7233f41317051ab42e46ae083feed619\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.463508 kubelet[2131]: I0117 12:18:31.462956 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.463508 kubelet[2131]: I0117 12:18:31.462982 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.463508 kubelet[2131]: I0117 12:18:31.463008 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.463691 kubelet[2131]: I0117 12:18:31.463026 2131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe14243ae54268194a86772f3c04127d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-f-fd30d73867\" (UID: \"fe14243ae54268194a86772f3c04127d\") " pod="kube-system/kube-scheduler-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.573284 kubelet[2131]: I0117 12:18:31.573053 2131 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.573529 kubelet[2131]: E0117 12:18:31.573496 2131 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://209.38.138.250:6443/api/v1/nodes\": dial tcp 209.38.138.250:6443: connect: connection refused" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.630711 kubelet[2131]: E0117 12:18:31.630440 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:31.632523 containerd[1459]: time="2025-01-17T12:18:31.632188939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-f-fd30d73867,Uid:7233f41317051ab42e46ae083feed619,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:31.636156 systemd-resolved[1323]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Jan 17 12:18:31.648901 kubelet[2131]: E0117 12:18:31.648813 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:31.649949 containerd[1459]: time="2025-01-17T12:18:31.649541723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-f-fd30d73867,Uid:6a253aeb1da1690fe7214b0d25291172,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:31.660038 kubelet[2131]: E0117 12:18:31.659954 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:31.661278 containerd[1459]: time="2025-01-17T12:18:31.661193061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-f-fd30d73867,Uid:fe14243ae54268194a86772f3c04127d,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:31.767732 kubelet[2131]: E0117 12:18:31.767475 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.138.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-fd30d73867?timeout=10s\": dial tcp 209.38.138.250:6443: connect: connection refused" interval="800ms" Jan 17 12:18:31.975415 kubelet[2131]: I0117 12:18:31.975347 2131 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:31.975972 kubelet[2131]: E0117 12:18:31.975920 2131 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://209.38.138.250:6443/api/v1/nodes\": dial tcp 209.38.138.250:6443: connect: connection refused" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:32.263612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815882141.mount: Deactivated successfully. 
Jan 17 12:18:32.273520 containerd[1459]: time="2025-01-17T12:18:32.272485812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:32.275309 containerd[1459]: time="2025-01-17T12:18:32.275204809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:18:32.278797 containerd[1459]: time="2025-01-17T12:18:32.278669554Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:32.280792 containerd[1459]: time="2025-01-17T12:18:32.280171343Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:32.280792 containerd[1459]: time="2025-01-17T12:18:32.280787174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:18:32.282505 containerd[1459]: time="2025-01-17T12:18:32.282340751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:18:32.282505 containerd[1459]: time="2025-01-17T12:18:32.282465810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:32.286774 containerd[1459]: time="2025-01-17T12:18:32.286045809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:18:32.286906 containerd[1459]: time="2025-01-17T12:18:32.286866851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 625.569021ms" Jan 17 12:18:32.288976 containerd[1459]: time="2025-01-17T12:18:32.288925825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 639.281031ms" Jan 17 12:18:32.293775 containerd[1459]: time="2025-01-17T12:18:32.293693605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 661.161798ms" Jan 17 12:18:32.425828 kubelet[2131]: W0117 12:18:32.425676 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.138.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.138.250:6443: connect: connection refused Jan 17 12:18:32.425828 
kubelet[2131]: E0117 12:18:32.425778 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://209.38.138.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.138.250:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:18:32.520098 containerd[1459]: time="2025-01-17T12:18:32.518880175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:32.520098 containerd[1459]: time="2025-01-17T12:18:32.518946394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:32.520098 containerd[1459]: time="2025-01-17T12:18:32.518975767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:32.520098 containerd[1459]: time="2025-01-17T12:18:32.519069157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:32.520717 containerd[1459]: time="2025-01-17T12:18:32.516383539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:32.520717 containerd[1459]: time="2025-01-17T12:18:32.516474364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:32.520717 containerd[1459]: time="2025-01-17T12:18:32.516492263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:32.520717 containerd[1459]: time="2025-01-17T12:18:32.516597359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:32.530902 containerd[1459]: time="2025-01-17T12:18:32.530780646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:32.530902 containerd[1459]: time="2025-01-17T12:18:32.530850072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:32.531119 containerd[1459]: time="2025-01-17T12:18:32.530866665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:32.531119 containerd[1459]: time="2025-01-17T12:18:32.530958564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:32.560172 systemd[1]: Started cri-containerd-b499fc5d782b06c66e83db6c8bc54d6886a4e261a323736f4f305e43b161ee58.scope - libcontainer container b499fc5d782b06c66e83db6c8bc54d6886a4e261a323736f4f305e43b161ee58. 
Jan 17 12:18:32.569329 kubelet[2131]: E0117 12:18:32.569049 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.138.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-fd30d73867?timeout=10s\": dial tcp 209.38.138.250:6443: connect: connection refused" interval="1.6s" Jan 17 12:18:32.578015 systemd[1]: Started cri-containerd-a7f50722cf4707419a0f5eedc0707dcf016275e9cea2f3da055e5db4f3ee4210.scope - libcontainer container a7f50722cf4707419a0f5eedc0707dcf016275e9cea2f3da055e5db4f3ee4210. Jan 17 12:18:32.578722 kubelet[2131]: W0117 12:18:32.578661 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.138.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 209.38.138.250:6443: connect: connection refused Jan 17 12:18:32.578810 kubelet[2131]: E0117 12:18:32.578733 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://209.38.138.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.138.250:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:18:32.594991 systemd[1]: Started cri-containerd-c16b27204cd23546cf2c58466d2378086b73e158240570b7083f1c08de580729.scope - libcontainer container c16b27204cd23546cf2c58466d2378086b73e158240570b7083f1c08de580729. Jan 17 12:18:32.662633 containerd[1459]: time="2025-01-17T12:18:32.662586705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-f-fd30d73867,Uid:7233f41317051ab42e46ae083feed619,Namespace:kube-system,Attempt:0,} returns sandbox id \"b499fc5d782b06c66e83db6c8bc54d6886a4e261a323736f4f305e43b161ee58\"" Jan 17 12:18:32.679177 kubelet[2131]: E0117 12:18:32.679137 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:32.684698 containerd[1459]: time="2025-01-17T12:18:32.684542654Z" level=info msg="CreateContainer within sandbox \"b499fc5d782b06c66e83db6c8bc54d6886a4e261a323736f4f305e43b161ee58\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:18:32.695637 kubelet[2131]: W0117 12:18:32.695566 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.138.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-fd30d73867&limit=500&resourceVersion=0": dial tcp 209.38.138.250:6443: connect: connection refused Jan 17 12:18:32.695637 kubelet[2131]: E0117 12:18:32.695640 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://209.38.138.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-fd30d73867&limit=500&resourceVersion=0\": dial tcp 209.38.138.250:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:18:32.697931 containerd[1459]: time="2025-01-17T12:18:32.697878841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-f-fd30d73867,Uid:6a253aeb1da1690fe7214b0d25291172,Namespace:kube-system,Attempt:0,} returns sandbox id \"c16b27204cd23546cf2c58466d2378086b73e158240570b7083f1c08de580729\"" Jan 17 12:18:32.699613 kubelet[2131]: E0117 12:18:32.699241 2131 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:32.708552 containerd[1459]: time="2025-01-17T12:18:32.708489821Z" level=info msg="CreateContainer within sandbox \"c16b27204cd23546cf2c58466d2378086b73e158240570b7083f1c08de580729\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:18:32.717170 containerd[1459]: time="2025-01-17T12:18:32.717088458Z" level=info msg="CreateContainer within sandbox \"b499fc5d782b06c66e83db6c8bc54d6886a4e261a323736f4f305e43b161ee58\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3bf02407d67c40ed97a858e4bde9237eaecb33fe02a2a80b6c5418e277a77b12\"" Jan 17 12:18:32.729638 kubelet[2131]: W0117 12:18:32.729591 2131 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.138.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.138.250:6443: connect: connection refused Jan 17 12:18:32.729822 kubelet[2131]: E0117 12:18:32.729673 2131 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://209.38.138.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 209.38.138.250:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:18:32.734988 containerd[1459]: time="2025-01-17T12:18:32.734904654Z" level=info msg="StartContainer for \"3bf02407d67c40ed97a858e4bde9237eaecb33fe02a2a80b6c5418e277a77b12\"" Jan 17 12:18:32.736856 containerd[1459]: time="2025-01-17T12:18:32.736818504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-f-fd30d73867,Uid:fe14243ae54268194a86772f3c04127d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7f50722cf4707419a0f5eedc0707dcf016275e9cea2f3da055e5db4f3ee4210\"" Jan 17 12:18:32.738707 kubelet[2131]: E0117 12:18:32.738627 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:32.740949 containerd[1459]: time="2025-01-17T12:18:32.740822294Z" level=info msg="CreateContainer within sandbox \"c16b27204cd23546cf2c58466d2378086b73e158240570b7083f1c08de580729\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65bba81f3cf6e9706d8372cf91445d9bfc6c90f3ea20c202fb23a60541f707a9\"" Jan 17 12:18:32.743231 containerd[1459]: time="2025-01-17T12:18:32.743094269Z" level=info msg="StartContainer for \"65bba81f3cf6e9706d8372cf91445d9bfc6c90f3ea20c202fb23a60541f707a9\"" Jan 17 12:18:32.745984 containerd[1459]: time="2025-01-17T12:18:32.745826743Z" level=info msg="CreateContainer within sandbox \"a7f50722cf4707419a0f5eedc0707dcf016275e9cea2f3da055e5db4f3ee4210\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:18:32.765288 containerd[1459]: time="2025-01-17T12:18:32.765183329Z" level=info msg="CreateContainer within sandbox \"a7f50722cf4707419a0f5eedc0707dcf016275e9cea2f3da055e5db4f3ee4210\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4536f1a408f6ea0cc65d8f84335adeb34ec1961ae8ed2d0c7e1976f850c7bbf5\"" Jan 17 12:18:32.767024 containerd[1459]: time="2025-01-17T12:18:32.766261545Z" level=info msg="StartContainer for 
\"4536f1a408f6ea0cc65d8f84335adeb34ec1961ae8ed2d0c7e1976f850c7bbf5\"" Jan 17 12:18:32.777954 kubelet[2131]: I0117 12:18:32.777817 2131 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:32.779265 kubelet[2131]: E0117 12:18:32.779218 2131 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://209.38.138.250:6443/api/v1/nodes\": dial tcp 209.38.138.250:6443: connect: connection refused" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:32.802333 systemd[1]: Started cri-containerd-3bf02407d67c40ed97a858e4bde9237eaecb33fe02a2a80b6c5418e277a77b12.scope - libcontainer container 3bf02407d67c40ed97a858e4bde9237eaecb33fe02a2a80b6c5418e277a77b12. Jan 17 12:18:32.837159 systemd[1]: Started cri-containerd-4536f1a408f6ea0cc65d8f84335adeb34ec1961ae8ed2d0c7e1976f850c7bbf5.scope - libcontainer container 4536f1a408f6ea0cc65d8f84335adeb34ec1961ae8ed2d0c7e1976f850c7bbf5. Jan 17 12:18:32.839539 systemd[1]: Started cri-containerd-65bba81f3cf6e9706d8372cf91445d9bfc6c90f3ea20c202fb23a60541f707a9.scope - libcontainer container 65bba81f3cf6e9706d8372cf91445d9bfc6c90f3ea20c202fb23a60541f707a9. Jan 17 12:18:32.928317 containerd[1459]: time="2025-01-17T12:18:32.928238551Z" level=info msg="StartContainer for \"3bf02407d67c40ed97a858e4bde9237eaecb33fe02a2a80b6c5418e277a77b12\" returns successfully" Jan 17 12:18:32.935798 containerd[1459]: time="2025-01-17T12:18:32.935630637Z" level=info msg="StartContainer for \"65bba81f3cf6e9706d8372cf91445d9bfc6c90f3ea20c202fb23a60541f707a9\" returns successfully" Jan 17 12:18:32.999613 containerd[1459]: time="2025-01-17T12:18:32.999416855Z" level=info msg="StartContainer for \"4536f1a408f6ea0cc65d8f84335adeb34ec1961ae8ed2d0c7e1976f850c7bbf5\" returns successfully" Jan 17 12:18:33.215834 kubelet[2131]: E0117 12:18:33.215797 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:33.220156 kubelet[2131]: E0117 12:18:33.219778 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:33.222332 kubelet[2131]: E0117 12:18:33.222288 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:34.225605 kubelet[2131]: E0117 12:18:34.225569 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:34.382842 kubelet[2131]: I0117 12:18:34.382276 2131 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:35.076766 kubelet[2131]: E0117 12:18:35.076710 2131 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-f-fd30d73867\" not found" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:35.130014 kubelet[2131]: I0117 12:18:35.129962 2131 apiserver.go:52] "Watching apiserver" Jan 17 12:18:35.161158 kubelet[2131]: I0117 12:18:35.161053 2131 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:18:35.243976 kubelet[2131]: I0117 12:18:35.243858 2131 kubelet_node_status.go:75] "Successfully registered node" 
node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:35.243976 kubelet[2131]: E0117 12:18:35.243959 2131 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.0-f-fd30d73867\": node \"ci-4081.3.0-f-fd30d73867\" not found" Jan 17 12:18:36.345719 kubelet[2131]: W0117 12:18:36.345662 2131 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:36.346695 kubelet[2131]: E0117 12:18:36.346596 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:37.230344 kubelet[2131]: E0117 12:18:37.230175 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:37.331596 systemd[1]: Reloading requested from client PID 2401 ('systemctl') (unit session-7.scope)... Jan 17 12:18:37.331620 systemd[1]: Reloading... Jan 17 12:18:37.431883 zram_generator::config[2439]: No configuration found. Jan 17 12:18:37.676545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:37.784191 systemd[1]: Reloading finished in 452 ms. Jan 17 12:18:37.832023 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:37.847716 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:18:37.848262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:37.848614 systemd[1]: kubelet.service: Consumed 1.308s CPU time, 111.2M memory peak, 0B memory swap peak. Jan 17 12:18:37.859281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:38.058171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:38.072268 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:18:38.182911 kubelet[2491]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:18:38.182911 kubelet[2491]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:18:38.182911 kubelet[2491]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:18:38.187542 kubelet[2491]: I0117 12:18:38.187390 2491 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:18:38.199297 kubelet[2491]: I0117 12:18:38.198989 2491 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:18:38.199297 kubelet[2491]: I0117 12:18:38.199038 2491 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:18:38.200021 kubelet[2491]: I0117 12:18:38.199991 2491 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:18:38.202584 kubelet[2491]: I0117 12:18:38.202543 2491 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:18:38.213146 kubelet[2491]: I0117 12:18:38.212694 2491 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:18:38.218156 kubelet[2491]: E0117 12:18:38.218097 2491 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:18:38.218156 kubelet[2491]: I0117 12:18:38.218141 2491 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:18:38.223460 kubelet[2491]: I0117 12:18:38.223397 2491 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:18:38.223884 kubelet[2491]: I0117 12:18:38.223585 2491 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:18:38.224620 kubelet[2491]: I0117 12:18:38.223984 2491 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:18:38.224620 kubelet[2491]: I0117 12:18:38.224090 2491 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.0-f-fd30d73867","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:18:38.224620 kubelet[2491]: I0117 12:18:38.224504 2491 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:18:38.224620 kubelet[2491]: I0117 12:18:38.224519 2491 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:18:38.224969 kubelet[2491]: I0117 12:18:38.224591 2491 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:38.224969 kubelet[2491]: I0117 12:18:38.224853 2491 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:18:38.224969 kubelet[2491]: I0117 12:18:38.224869 2491 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:18:38.225562 kubelet[2491]: I0117 12:18:38.225538 2491 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:18:38.228037 kubelet[2491]: I0117 12:18:38.227979 2491 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:18:38.232645 kubelet[2491]: I0117 12:18:38.232583 2491 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:18:38.233110 kubelet[2491]: I0117 12:18:38.233088 2491 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:18:38.233540 kubelet[2491]: I0117 12:18:38.233517 2491 server.go:1269] "Started kubelet" Jan 17 12:18:38.241566 kubelet[2491]: I0117 12:18:38.241501 2491 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:18:38.250859 kubelet[2491]: I0117 12:18:38.244948 2491 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:18:38.250859 kubelet[2491]: I0117 12:18:38.246595 2491 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:18:38.250859 kubelet[2491]: I0117 12:18:38.247002 2491 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:18:38.250859 kubelet[2491]: I0117 12:18:38.247679 2491 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:18:38.251524 kubelet[2491]: E0117 12:18:38.251490 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-f-fd30d73867\" not found" Jan 17 12:18:38.251602 kubelet[2491]: I0117 12:18:38.251543 2491 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:18:38.254232 kubelet[2491]: I0117 12:18:38.253943 2491 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:18:38.254232 kubelet[2491]: I0117 12:18:38.254135 2491 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:18:38.264872 kubelet[2491]: I0117 12:18:38.264833 2491 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:18:38.265383 kubelet[2491]: I0117 12:18:38.264948 2491 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:18:38.270797 kubelet[2491]: I0117 12:18:38.269969 2491 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:18:38.271340 kubelet[2491]: I0117 12:18:38.271311 2491 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:18:38.271421 kubelet[2491]: I0117 12:18:38.271354 2491 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:18:38.271421 kubelet[2491]: I0117 12:18:38.271376 2491 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:18:38.271482 kubelet[2491]: E0117 12:18:38.271421 2491 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:18:38.273783 kubelet[2491]: I0117 12:18:38.272082 2491 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:18:38.288776 kubelet[2491]: I0117 12:18:38.287590 2491 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:18:38.289256 kubelet[2491]: E0117 12:18:38.289111 2491 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:18:38.359552 kubelet[2491]: I0117 12:18:38.358280 2491 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:18:38.359715 kubelet[2491]: I0117 12:18:38.359692 2491 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:18:38.359794 kubelet[2491]: I0117 12:18:38.359786 2491 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:18:38.360045 kubelet[2491]: I0117 12:18:38.360027 2491 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:18:38.360126 kubelet[2491]: I0117 12:18:38.360105 2491 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:18:38.360166 kubelet[2491]: I0117 12:18:38.360161 2491 policy_none.go:49] "None policy: Start" Jan 17 12:18:38.361429 kubelet[2491]: I0117 12:18:38.361405 2491 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:18:38.361628 kubelet[2491]: I0117 12:18:38.361618 2491 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:18:38.366370 kubelet[2491]: I0117 12:18:38.366333 2491 state_mem.go:75] "Updated machine memory state" Jan 17 12:18:38.372494 kubelet[2491]: E0117 12:18:38.372295 2491 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:18:38.378357 kubelet[2491]: I0117 12:18:38.377112 2491 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:18:38.378357 kubelet[2491]: I0117 12:18:38.377320 2491 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:18:38.378357 kubelet[2491]: I0117 12:18:38.377331 2491 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:18:38.378357 kubelet[2491]: I0117 12:18:38.378219 2491 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:18:38.496698 kubelet[2491]: I0117 12:18:38.496656 2491 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.513876 kubelet[2491]: I0117 12:18:38.512665 2491 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.514240 kubelet[2491]: I0117 12:18:38.514158 2491 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.588258 kubelet[2491]: W0117 12:18:38.588217 2491 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:38.593860 kubelet[2491]: W0117 12:18:38.592917 2491 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:38.593860 kubelet[2491]: W0117 12:18:38.593760 2491 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:18:38.594192 kubelet[2491]: E0117 12:18:38.594116 2491 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-f-fd30d73867\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.658332 kubelet[2491]: I0117 12:18:38.657800 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.658332 kubelet[2491]: I0117 12:18:38.657879 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe14243ae54268194a86772f3c04127d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-f-fd30d73867\" (UID: \"fe14243ae54268194a86772f3c04127d\") " pod="kube-system/kube-scheduler-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.658332 kubelet[2491]: I0117 12:18:38.657931 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7233f41317051ab42e46ae083feed619-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-f-fd30d73867\" (UID: \"7233f41317051ab42e46ae083feed619\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.658332 kubelet[2491]: I0117 12:18:38.657961 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.658332 kubelet[2491]: I0117 12:18:38.658012 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7233f41317051ab42e46ae083feed619-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-f-fd30d73867\" (UID: \"7233f41317051ab42e46ae083feed619\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.658953 kubelet[2491]: I0117 12:18:38.658051 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7233f41317051ab42e46ae083feed619-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-f-fd30d73867\" (UID: \"7233f41317051ab42e46ae083feed619\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.658953 kubelet[2491]: I0117 12:18:38.658077 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.658953 kubelet[2491]: I0117 12:18:38.658105 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.658953 kubelet[2491]: I0117 12:18:38.658131 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a253aeb1da1690fe7214b0d25291172-kubeconfig\") pod 
\"kube-controller-manager-ci-4081.3.0-f-fd30d73867\" (UID: \"6a253aeb1da1690fe7214b0d25291172\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" Jan 17 12:18:38.890595 kubelet[2491]: E0117 12:18:38.890504 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:38.894409 kubelet[2491]: E0117 12:18:38.894284 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:38.894409 kubelet[2491]: E0117 12:18:38.894406 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:39.228921 kubelet[2491]: I0117 12:18:39.228865 2491 apiserver.go:52] "Watching apiserver" Jan 17 12:18:39.254963 kubelet[2491]: I0117 12:18:39.254907 2491 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:18:39.326722 kubelet[2491]: E0117 12:18:39.325453 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:39.326722 kubelet[2491]: E0117 12:18:39.326295 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:39.326722 kubelet[2491]: E0117 12:18:39.326474 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:39.509841 kubelet[2491]: I0117 12:18:39.509631 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-f-fd30d73867" podStartSLOduration=3.509605728 podStartE2EDuration="3.509605728s" podCreationTimestamp="2025-01-17 12:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:39.432542433 +0000 UTC m=+1.343045028" watchObservedRunningTime="2025-01-17 12:18:39.509605728 +0000 UTC m=+1.420108327" Jan 17 12:18:39.542765 kubelet[2491]: I0117 12:18:39.542648 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-f-fd30d73867" podStartSLOduration=1.542627606 podStartE2EDuration="1.542627606s" podCreationTimestamp="2025-01-17 12:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:39.511350448 +0000 UTC m=+1.421853047" watchObservedRunningTime="2025-01-17 12:18:39.542627606 +0000 UTC m=+1.453130212" Jan 17 12:18:40.329864 kubelet[2491]: E0117 12:18:40.329352 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:42.747727 update_engine[1444]: I20250117 12:18:42.747573 1444 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:18:42.790505 kubelet[2491]: E0117 12:18:42.788191 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:42.885793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2556) Jan 17 12:18:42.885999 kubelet[2491]: I0117 12:18:42.883166 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-f-fd30d73867" podStartSLOduration=4.883116105 podStartE2EDuration="4.883116105s" podCreationTimestamp="2025-01-17 12:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:39.544392538 +0000 UTC m=+1.454895136" watchObservedRunningTime="2025-01-17 12:18:42.883116105 +0000 UTC m=+4.793618698" Jan 17 12:18:42.948787 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2557) Jan 17 12:18:43.341258 kubelet[2491]: E0117 12:18:43.341209 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:43.342491 kubelet[2491]: I0117 12:18:43.342134 2491 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:18:43.344284 containerd[1459]: time="2025-01-17T12:18:43.344188760Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:18:43.345227 kubelet[2491]: I0117 12:18:43.344667 2491 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:18:44.269330 systemd[1]: Created slice kubepods-besteffort-podf1204897_5a0c_4bd0_bd95_cb30e43df35e.slice - libcontainer container kubepods-besteffort-podf1204897_5a0c_4bd0_bd95_cb30e43df35e.slice. 
Jan 17 12:18:44.303863 kubelet[2491]: I0117 12:18:44.301615 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1204897-5a0c-4bd0-bd95-cb30e43df35e-kube-proxy\") pod \"kube-proxy-2gh2s\" (UID: \"f1204897-5a0c-4bd0-bd95-cb30e43df35e\") " pod="kube-system/kube-proxy-2gh2s" Jan 17 12:18:44.303863 kubelet[2491]: I0117 12:18:44.301667 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1204897-5a0c-4bd0-bd95-cb30e43df35e-lib-modules\") pod \"kube-proxy-2gh2s\" (UID: \"f1204897-5a0c-4bd0-bd95-cb30e43df35e\") " pod="kube-system/kube-proxy-2gh2s" Jan 17 12:18:44.303863 kubelet[2491]: I0117 12:18:44.301698 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1204897-5a0c-4bd0-bd95-cb30e43df35e-xtables-lock\") pod \"kube-proxy-2gh2s\" (UID: \"f1204897-5a0c-4bd0-bd95-cb30e43df35e\") " pod="kube-system/kube-proxy-2gh2s" Jan 17 12:18:44.303863 kubelet[2491]: I0117 12:18:44.301727 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2kt5\" (UniqueName: \"kubernetes.io/projected/f1204897-5a0c-4bd0-bd95-cb30e43df35e-kube-api-access-m2kt5\") pod \"kube-proxy-2gh2s\" (UID: \"f1204897-5a0c-4bd0-bd95-cb30e43df35e\") " pod="kube-system/kube-proxy-2gh2s" Jan 17 12:18:44.333120 sudo[1649]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:44.348233 sshd[1646]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:44.362876 systemd[1]: sshd@6-209.38.138.250:22-139.178.68.195:45484.service: Deactivated successfully. Jan 17 12:18:44.375519 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:18:44.377898 systemd[1]: session-7.scope: Consumed 5.876s CPU time, 148.6M memory peak, 0B memory swap peak. Jan 17 12:18:44.395092 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:18:44.398736 systemd-logind[1443]: Removed session 7. Jan 17 12:18:44.601246 kubelet[2491]: E0117 12:18:44.600975 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:44.604485 containerd[1459]: time="2025-01-17T12:18:44.603659763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2gh2s,Uid:f1204897-5a0c-4bd0-bd95-cb30e43df35e,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:44.625848 systemd[1]: Created slice kubepods-besteffort-podfc339706_4626_42f4_a49d_5517369d90f2.slice - libcontainer container kubepods-besteffort-podfc339706_4626_42f4_a49d_5517369d90f2.slice. Jan 17 12:18:44.672263 containerd[1459]: time="2025-01-17T12:18:44.671519622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:44.672263 containerd[1459]: time="2025-01-17T12:18:44.671635468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:44.672263 containerd[1459]: time="2025-01-17T12:18:44.671655865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:44.672263 containerd[1459]: time="2025-01-17T12:18:44.671829304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:44.707538 kubelet[2491]: I0117 12:18:44.707465 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc339706-4626-42f4-a49d-5517369d90f2-var-lib-calico\") pod \"tigera-operator-76c4976dd7-w7c5r\" (UID: \"fc339706-4626-42f4-a49d-5517369d90f2\") " pod="tigera-operator/tigera-operator-76c4976dd7-w7c5r" Jan 17 12:18:44.708131 kubelet[2491]: I0117 12:18:44.708037 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4trm9\" (UniqueName: \"kubernetes.io/projected/fc339706-4626-42f4-a49d-5517369d90f2-kube-api-access-4trm9\") pod \"tigera-operator-76c4976dd7-w7c5r\" (UID: \"fc339706-4626-42f4-a49d-5517369d90f2\") " pod="tigera-operator/tigera-operator-76c4976dd7-w7c5r" Jan 17 12:18:44.739629 systemd[1]: Started cri-containerd-d4ac56940711495a4d5918c602dd24bf3cbcd0836210a6bf2d7412425da631a2.scope - libcontainer container d4ac56940711495a4d5918c602dd24bf3cbcd0836210a6bf2d7412425da631a2. Jan 17 12:18:44.801425 containerd[1459]: time="2025-01-17T12:18:44.801240986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2gh2s,Uid:f1204897-5a0c-4bd0-bd95-cb30e43df35e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4ac56940711495a4d5918c602dd24bf3cbcd0836210a6bf2d7412425da631a2\"" Jan 17 12:18:44.805866 kubelet[2491]: E0117 12:18:44.804207 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:44.813238 containerd[1459]: time="2025-01-17T12:18:44.813130280Z" level=info msg="CreateContainer within sandbox \"d4ac56940711495a4d5918c602dd24bf3cbcd0836210a6bf2d7412425da631a2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:18:44.853559 containerd[1459]: time="2025-01-17T12:18:44.853294280Z" level=info msg="CreateContainer within sandbox \"d4ac56940711495a4d5918c602dd24bf3cbcd0836210a6bf2d7412425da631a2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"31ff7b359fe85ecda435e477ca2402e57b0cb32928891f67365e953a9c7b92bb\"" Jan 17 12:18:44.856231 containerd[1459]: time="2025-01-17T12:18:44.856028591Z" level=info msg="StartContainer for \"31ff7b359fe85ecda435e477ca2402e57b0cb32928891f67365e953a9c7b92bb\"" Jan 17 12:18:44.924709 systemd[1]: Started cri-containerd-31ff7b359fe85ecda435e477ca2402e57b0cb32928891f67365e953a9c7b92bb.scope - libcontainer container 31ff7b359fe85ecda435e477ca2402e57b0cb32928891f67365e953a9c7b92bb. Jan 17 12:18:44.962635 containerd[1459]: time="2025-01-17T12:18:44.962489594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-w7c5r,Uid:fc339706-4626-42f4-a49d-5517369d90f2,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:18:44.994604 containerd[1459]: time="2025-01-17T12:18:44.994519809Z" level=info msg="StartContainer for \"31ff7b359fe85ecda435e477ca2402e57b0cb32928891f67365e953a9c7b92bb\" returns successfully" Jan 17 12:18:45.033141 containerd[1459]: time="2025-01-17T12:18:45.032899022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:45.033141 containerd[1459]: time="2025-01-17T12:18:45.032998428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:45.033141 containerd[1459]: time="2025-01-17T12:18:45.033013367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:45.036606 containerd[1459]: time="2025-01-17T12:18:45.035980516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:45.073240 systemd[1]: Started cri-containerd-d786bd76393b38e3e70a68bef23b0126e3e3b4f164ee27ffb1b61b2d27e4fd3e.scope - libcontainer container d786bd76393b38e3e70a68bef23b0126e3e3b4f164ee27ffb1b61b2d27e4fd3e. Jan 17 12:18:45.189913 containerd[1459]: time="2025-01-17T12:18:45.189828095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-w7c5r,Uid:fc339706-4626-42f4-a49d-5517369d90f2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d786bd76393b38e3e70a68bef23b0126e3e3b4f164ee27ffb1b61b2d27e4fd3e\"" Jan 17 12:18:45.197874 containerd[1459]: time="2025-01-17T12:18:45.197808060Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:18:45.380965 kubelet[2491]: E0117 12:18:45.380845 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:45.411777 kubelet[2491]: I0117 12:18:45.411690 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2gh2s" podStartSLOduration=1.411667732 podStartE2EDuration="1.411667732s" podCreationTimestamp="2025-01-17 12:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:45.411505351 +0000 UTC m=+7.322007974" watchObservedRunningTime="2025-01-17 12:18:45.411667732 +0000 UTC m=+7.322170325" Jan 17 12:18:45.463279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613488604.mount: Deactivated successfully. Jan 17 12:18:45.635837 kubelet[2491]: E0117 12:18:45.633653 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:46.384802 kubelet[2491]: E0117 12:18:46.384062 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:47.395835 kubelet[2491]: E0117 12:18:47.395781 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:47.461196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3425474972.mount: Deactivated successfully. 
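
Note on the pod_startup_latency_tracker entries: the values in this log are consistent with podStartSLOduration being podStartE2EDuration minus the image-pull window. For kube-proxy-2gh2s just above, both pull stamps are Go's zero time ("0001-01-01 00:00:00"), meaning no pull was needed, so the two durations are equal at about 1.41s; for tigera-operator further below (the 12:18:53 entry), 9.728653327s minus the 4.475135739s pull window gives 5.253517588s, matching the logged 5.253517587 to rounding. A sketch of that relation, inferred from the logged values rather than taken from kubelet's code:

    package main

    import (
        "fmt"
        "time"
    )

    // sloDuration: E2E startup time minus the image-pull window; zero-valued
    // pull stamps mean no pull happened, so SLO == E2E. An inference from
    // this log's numbers, not kubelet's actual tracker code.
    func sloDuration(e2e time.Duration, firstPull, lastPull time.Time) time.Duration {
        if firstPull.IsZero() || lastPull.IsZero() {
            return e2e
        }
        return e2e - lastPull.Sub(firstPull)
    }

    func main() {
        // tigera-operator values from the 12:18:53 entry below.
        first, _ := time.Parse(time.RFC3339Nano, "2025-01-17T12:18:45.19488816Z")
        last, _ := time.Parse(time.RFC3339Nano, "2025-01-17T12:18:49.670023899Z")
        e2e := 9728653327 * time.Nanosecond
        fmt.Println(sloDuration(e2e, first, last)) // ~5.253517588s
    }
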
Jan 17 12:18:48.346822 kubelet[2491]: E0117 12:18:48.345145 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:48.399524 kubelet[2491]: E0117 12:18:48.398976 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:49.572221 containerd[1459]: time="2025-01-17T12:18:49.571819278Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:49.619388 containerd[1459]: time="2025-01-17T12:18:49.619202751Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764345" Jan 17 12:18:49.660194 containerd[1459]: time="2025-01-17T12:18:49.660091670Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:49.666726 containerd[1459]: time="2025-01-17T12:18:49.666608066Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:49.668788 containerd[1459]: time="2025-01-17T12:18:49.668195410Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.470318039s" Jan 17 12:18:49.668788 containerd[1459]: time="2025-01-17T12:18:49.668261719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:18:49.701469 containerd[1459]: time="2025-01-17T12:18:49.701302327Z" level=info msg="CreateContainer within sandbox \"d786bd76393b38e3e70a68bef23b0126e3e3b4f164ee27ffb1b61b2d27e4fd3e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:18:49.720441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844240697.mount: Deactivated successfully. Jan 17 12:18:49.730006 containerd[1459]: time="2025-01-17T12:18:49.729918337Z" level=info msg="CreateContainer within sandbox \"d786bd76393b38e3e70a68bef23b0126e3e3b4f164ee27ffb1b61b2d27e4fd3e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb18aee88e0bcf80a3d2e9dd071ea4f097806888d7780ba2cc0944cf4fccb7b4\"" Jan 17 12:18:49.731844 containerd[1459]: time="2025-01-17T12:18:49.731428717Z" level=info msg="StartContainer for \"cb18aee88e0bcf80a3d2e9dd071ea4f097806888d7780ba2cc0944cf4fccb7b4\"" Jan 17 12:18:49.796054 systemd[1]: Started cri-containerd-cb18aee88e0bcf80a3d2e9dd071ea4f097806888d7780ba2cc0944cf4fccb7b4.scope - libcontainer container cb18aee88e0bcf80a3d2e9dd071ea4f097806888d7780ba2cc0944cf4fccb7b4. 
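
A small worked number from the pull above: the stop-pulling line reports 21764345 bytes read and the Pulled-image line reports 4.470318039s, so the tigera operator image (about 20.8 MiB compressed) came down at roughly 4.6 MiB/s:

    package main

    import "fmt"

    func main() {
        const bytesRead = 21764345.0 // "bytes read" from the stop-pulling line
        const seconds = 4.470318039  // duration from the Pulled-image line
        mib := bytesRead / (1 << 20)
        fmt.Printf("%.1f MiB at %.2f MiB/s\n", mib, mib/seconds) // 20.8 MiB at 4.64 MiB/s
    }
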
Jan 17 12:18:49.838789 containerd[1459]: time="2025-01-17T12:18:49.838265434Z" level=info msg="StartContainer for \"cb18aee88e0bcf80a3d2e9dd071ea4f097806888d7780ba2cc0944cf4fccb7b4\" returns successfully" Jan 17 12:18:53.728774 kubelet[2491]: I0117 12:18:53.728678 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-w7c5r" podStartSLOduration=5.253517587 podStartE2EDuration="9.728653327s" podCreationTimestamp="2025-01-17 12:18:44 +0000 UTC" firstStartedPulling="2025-01-17 12:18:45.19488816 +0000 UTC m=+7.105390749" lastFinishedPulling="2025-01-17 12:18:49.670023899 +0000 UTC m=+11.580526489" observedRunningTime="2025-01-17 12:18:50.450272177 +0000 UTC m=+12.360774776" watchObservedRunningTime="2025-01-17 12:18:53.728653327 +0000 UTC m=+15.639155921" Jan 17 12:18:53.742148 systemd[1]: Created slice kubepods-besteffort-pod0c94f622_80de_4abd_b2f4_f05253e01f5a.slice - libcontainer container kubepods-besteffort-pod0c94f622_80de_4abd_b2f4_f05253e01f5a.slice. Jan 17 12:18:53.776116 kubelet[2491]: I0117 12:18:53.776052 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c94f622-80de-4abd-b2f4-f05253e01f5a-tigera-ca-bundle\") pod \"calico-typha-6f4594b88c-hzdmq\" (UID: \"0c94f622-80de-4abd-b2f4-f05253e01f5a\") " pod="calico-system/calico-typha-6f4594b88c-hzdmq" Jan 17 12:18:53.776116 kubelet[2491]: I0117 12:18:53.776102 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp9vp\" (UniqueName: \"kubernetes.io/projected/0c94f622-80de-4abd-b2f4-f05253e01f5a-kube-api-access-tp9vp\") pod \"calico-typha-6f4594b88c-hzdmq\" (UID: \"0c94f622-80de-4abd-b2f4-f05253e01f5a\") " pod="calico-system/calico-typha-6f4594b88c-hzdmq" Jan 17 12:18:53.776454 kubelet[2491]: I0117 12:18:53.776136 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0c94f622-80de-4abd-b2f4-f05253e01f5a-typha-certs\") pod \"calico-typha-6f4594b88c-hzdmq\" (UID: \"0c94f622-80de-4abd-b2f4-f05253e01f5a\") " pod="calico-system/calico-typha-6f4594b88c-hzdmq" Jan 17 12:18:53.920840 systemd[1]: Created slice kubepods-besteffort-pod77bacb2f_b10c_4b7c_824b_6ba816dc5586.slice - libcontainer container kubepods-besteffort-pod77bacb2f_b10c_4b7c_824b_6ba816dc5586.slice. 
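
Note on what follows: the calico-node pod declared below mounts flexvol-driver-host so that a Calico init container can install a FlexVolume driver (the nodeagent~uds binary) used by csi-node-driver-h55hv. Until that binary exists, every kubelet plugin probe re-execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with "init", gets no output, and fails to unmarshal it as JSON; that is the driver-call.go:262 / driver-call.go:149 / plugins.go:691 triplet repeated through the rest of this section, expected noise during Calico bring-up rather than a fault. The probe contract is roughly: exec the driver, read a JSON status from stdout. A rough Go sketch of that shape, assumed rather than copied from kubelet:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus is a minimal stand-in for the FlexVolume reply schema
    // (the real one lives in kubelet's volume/flexvolume package).
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func callDriver(path string, args ...string) (*DriverStatus, error) {
        out, err := exec.Command(path, args...).CombinedOutput()
        if err != nil {
            // When the binary is absent, exec fails before any output is
            // produced; kubelet logs this case at driver-call.go:149.
            return nil, fmt.Errorf("driver call failed: %v, output: %q", err, out)
        }
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // Empty output is not valid JSON, hence kubelet's
            // "unexpected end of JSON input" at driver-call.go:262.
            return nil, fmt.Errorf("failed to unmarshal output %q: %v", out, err)
        }
        return &st, nil
    }

    func main() {
        _, err := callDriver(
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
            "init")
        fmt.Println(err)
    }
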
Jan 17 12:18:53.978303 kubelet[2491]: I0117 12:18:53.977906 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77bacb2f-b10c-4b7c-824b-6ba816dc5586-tigera-ca-bundle\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978303 kubelet[2491]: I0117 12:18:53.977964 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-lib-modules\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978303 kubelet[2491]: I0117 12:18:53.977980 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-var-lib-calico\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978303 kubelet[2491]: I0117 12:18:53.978003 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-flexvol-driver-host\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978303 kubelet[2491]: I0117 12:18:53.978022 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-var-run-calico\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978577 kubelet[2491]: I0117 12:18:53.978039 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8tg4\" (UniqueName: \"kubernetes.io/projected/77bacb2f-b10c-4b7c-824b-6ba816dc5586-kube-api-access-h8tg4\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978577 kubelet[2491]: I0117 12:18:53.978055 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-xtables-lock\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978577 kubelet[2491]: I0117 12:18:53.978070 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-net-dir\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978577 kubelet[2491]: I0117 12:18:53.978086 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-log-dir\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978577 kubelet[2491]: I0117 12:18:53.978105 2491 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/77bacb2f-b10c-4b7c-824b-6ba816dc5586-node-certs\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978717 kubelet[2491]: I0117 12:18:53.978132 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-bin-dir\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:53.978717 kubelet[2491]: I0117 12:18:53.978157 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-policysync\") pod \"calico-node-7np4g\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") " pod="calico-system/calico-node-7np4g" Jan 17 12:18:54.049712 kubelet[2491]: E0117 12:18:54.049060 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:54.053644 containerd[1459]: time="2025-01-17T12:18:54.050327546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f4594b88c-hzdmq,Uid:0c94f622-80de-4abd-b2f4-f05253e01f5a,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:54.113331 kubelet[2491]: E0117 12:18:54.113204 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.113331 kubelet[2491]: W0117 12:18:54.113245 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.113331 kubelet[2491]: E0117 12:18:54.113275 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.117426 kubelet[2491]: E0117 12:18:54.117364 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb" Jan 17 12:18:54.124419 containerd[1459]: time="2025-01-17T12:18:54.123717799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:54.124419 containerd[1459]: time="2025-01-17T12:18:54.123816344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:54.124419 containerd[1459]: time="2025-01-17T12:18:54.123833198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:54.126914 containerd[1459]: time="2025-01-17T12:18:54.125281231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:54.160043 kubelet[2491]: E0117 12:18:54.159903 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.160043 kubelet[2491]: W0117 12:18:54.159948 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.160043 kubelet[2491]: E0117 12:18:54.159985 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.177275 systemd[1]: Started cri-containerd-1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179.scope - libcontainer container 1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179. Jan 17 12:18:54.178766 kubelet[2491]: E0117 12:18:54.177786 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.178766 kubelet[2491]: W0117 12:18:54.177825 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.178766 kubelet[2491]: E0117 12:18:54.177860 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.179816 kubelet[2491]: E0117 12:18:54.179502 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.179816 kubelet[2491]: W0117 12:18:54.179526 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.179816 kubelet[2491]: E0117 12:18:54.179639 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.181516 kubelet[2491]: E0117 12:18:54.181278 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.181516 kubelet[2491]: W0117 12:18:54.181299 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.181516 kubelet[2491]: E0117 12:18:54.181321 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:54.183598 kubelet[2491]: E0117 12:18:54.183383 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.183598 kubelet[2491]: W0117 12:18:54.183404 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.183598 kubelet[2491]: E0117 12:18:54.183424 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.185369 kubelet[2491]: E0117 12:18:54.185306 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.186032 kubelet[2491]: W0117 12:18:54.185815 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.186032 kubelet[2491]: E0117 12:18:54.185852 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.188063 kubelet[2491]: E0117 12:18:54.188041 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.188416 kubelet[2491]: W0117 12:18:54.188207 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.188416 kubelet[2491]: E0117 12:18:54.188245 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.190780 kubelet[2491]: E0117 12:18:54.189989 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.190780 kubelet[2491]: W0117 12:18:54.190008 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.190780 kubelet[2491]: E0117 12:18:54.190029 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.191661 kubelet[2491]: E0117 12:18:54.191447 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.191661 kubelet[2491]: W0117 12:18:54.191485 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.191661 kubelet[2491]: E0117 12:18:54.191504 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:54.191939 kubelet[2491]: E0117 12:18:54.191873 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.191939 kubelet[2491]: W0117 12:18:54.191886 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.191939 kubelet[2491]: E0117 12:18:54.191898 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.192601 kubelet[2491]: E0117 12:18:54.192170 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.192601 kubelet[2491]: W0117 12:18:54.192179 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.192601 kubelet[2491]: E0117 12:18:54.192190 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.192991 kubelet[2491]: E0117 12:18:54.192978 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.193107 kubelet[2491]: W0117 12:18:54.193048 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.193107 kubelet[2491]: E0117 12:18:54.193062 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.193716 kubelet[2491]: E0117 12:18:54.193516 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.193716 kubelet[2491]: W0117 12:18:54.193532 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.193716 kubelet[2491]: E0117 12:18:54.193544 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.194337 kubelet[2491]: E0117 12:18:54.194262 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.194337 kubelet[2491]: W0117 12:18:54.194277 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.194337 kubelet[2491]: E0117 12:18:54.194289 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:54.194972 kubelet[2491]: E0117 12:18:54.194727 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.194972 kubelet[2491]: W0117 12:18:54.194850 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.194972 kubelet[2491]: E0117 12:18:54.194867 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.195402 kubelet[2491]: E0117 12:18:54.195388 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.195510 kubelet[2491]: W0117 12:18:54.195452 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.195510 kubelet[2491]: E0117 12:18:54.195466 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.195857 kubelet[2491]: E0117 12:18:54.195771 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.195857 kubelet[2491]: W0117 12:18:54.195782 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.195857 kubelet[2491]: E0117 12:18:54.195792 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.196599 kubelet[2491]: E0117 12:18:54.196216 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.196599 kubelet[2491]: W0117 12:18:54.196228 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.196599 kubelet[2491]: E0117 12:18:54.196239 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.197075 kubelet[2491]: E0117 12:18:54.196918 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.197075 kubelet[2491]: W0117 12:18:54.196930 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.197075 kubelet[2491]: E0117 12:18:54.196953 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:54.197385 kubelet[2491]: E0117 12:18:54.197267 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.197385 kubelet[2491]: W0117 12:18:54.197279 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.197385 kubelet[2491]: E0117 12:18:54.197290 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.197814 kubelet[2491]: E0117 12:18:54.197774 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.197993 kubelet[2491]: W0117 12:18:54.197920 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.197993 kubelet[2491]: E0117 12:18:54.197941 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.198505 kubelet[2491]: E0117 12:18:54.198484 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.198505 kubelet[2491]: W0117 12:18:54.198504 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.198574 kubelet[2491]: E0117 12:18:54.198516 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.198574 kubelet[2491]: I0117 12:18:54.198549 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b99954fd-00d0-4234-8172-969ac6f807eb-kubelet-dir\") pod \"csi-node-driver-h55hv\" (UID: \"b99954fd-00d0-4234-8172-969ac6f807eb\") " pod="calico-system/csi-node-driver-h55hv" Jan 17 12:18:54.199066 kubelet[2491]: E0117 12:18:54.198792 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.199066 kubelet[2491]: W0117 12:18:54.198846 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.199066 kubelet[2491]: E0117 12:18:54.198858 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:54.199066 kubelet[2491]: I0117 12:18:54.198876 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b99954fd-00d0-4234-8172-969ac6f807eb-socket-dir\") pod \"csi-node-driver-h55hv\" (UID: \"b99954fd-00d0-4234-8172-969ac6f807eb\") " pod="calico-system/csi-node-driver-h55hv" Jan 17 12:18:54.199608 kubelet[2491]: E0117 12:18:54.199321 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.199608 kubelet[2491]: W0117 12:18:54.199342 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.199608 kubelet[2491]: E0117 12:18:54.199354 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.199608 kubelet[2491]: I0117 12:18:54.199373 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b99954fd-00d0-4234-8172-969ac6f807eb-varrun\") pod \"csi-node-driver-h55hv\" (UID: \"b99954fd-00d0-4234-8172-969ac6f807eb\") " pod="calico-system/csi-node-driver-h55hv" Jan 17 12:18:54.199978 kubelet[2491]: E0117 12:18:54.199806 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.199978 kubelet[2491]: W0117 12:18:54.199822 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.199978 kubelet[2491]: E0117 12:18:54.199843 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.200501 kubelet[2491]: E0117 12:18:54.200361 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.200501 kubelet[2491]: W0117 12:18:54.200376 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.200501 kubelet[2491]: E0117 12:18:54.200404 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.200960 kubelet[2491]: E0117 12:18:54.200874 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.200960 kubelet[2491]: W0117 12:18:54.200887 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.201440 kubelet[2491]: E0117 12:18:54.201287 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:54.201440 kubelet[2491]: E0117 12:18:54.201318 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.201440 kubelet[2491]: W0117 12:18:54.201330 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.201697 kubelet[2491]: E0117 12:18:54.201644 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.201697 kubelet[2491]: I0117 12:18:54.201679 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjmc\" (UniqueName: \"kubernetes.io/projected/b99954fd-00d0-4234-8172-969ac6f807eb-kube-api-access-qkjmc\") pod \"csi-node-driver-h55hv\" (UID: \"b99954fd-00d0-4234-8172-969ac6f807eb\") " pod="calico-system/csi-node-driver-h55hv" Jan 17 12:18:54.202353 kubelet[2491]: E0117 12:18:54.201904 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.202495 kubelet[2491]: W0117 12:18:54.202402 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.202864 kubelet[2491]: E0117 12:18:54.202734 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.203145 kubelet[2491]: W0117 12:18:54.202961 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.203145 kubelet[2491]: E0117 12:18:54.202984 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.203145 kubelet[2491]: E0117 12:18:54.202715 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.203652 kubelet[2491]: E0117 12:18:54.203550 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.203652 kubelet[2491]: W0117 12:18:54.203563 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.203652 kubelet[2491]: E0117 12:18:54.203586 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:54.204257 kubelet[2491]: E0117 12:18:54.204024 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.204257 kubelet[2491]: W0117 12:18:54.204040 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.204257 kubelet[2491]: E0117 12:18:54.204051 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.205077 kubelet[2491]: E0117 12:18:54.204798 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.205077 kubelet[2491]: W0117 12:18:54.204811 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.205077 kubelet[2491]: E0117 12:18:54.204839 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.205077 kubelet[2491]: I0117 12:18:54.204967 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b99954fd-00d0-4234-8172-969ac6f807eb-registration-dir\") pod \"csi-node-driver-h55hv\" (UID: \"b99954fd-00d0-4234-8172-969ac6f807eb\") " pod="calico-system/csi-node-driver-h55hv" Jan 17 12:18:54.206003 kubelet[2491]: E0117 12:18:54.205684 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.206003 kubelet[2491]: W0117 12:18:54.205701 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.206003 kubelet[2491]: E0117 12:18:54.205718 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.206651 kubelet[2491]: E0117 12:18:54.206412 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.206651 kubelet[2491]: W0117 12:18:54.206425 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.206651 kubelet[2491]: E0117 12:18:54.206437 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:54.206651 kubelet[2491]: E0117 12:18:54.206581 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.206651 kubelet[2491]: W0117 12:18:54.206589 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.206651 kubelet[2491]: E0117 12:18:54.206597 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.224900 kubelet[2491]: E0117 12:18:54.224849 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:18:54.226007 containerd[1459]: time="2025-01-17T12:18:54.225565476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7np4g,Uid:77bacb2f-b10c-4b7c-824b-6ba816dc5586,Namespace:calico-system,Attempt:0,}" Jan 17 12:18:54.293075 containerd[1459]: time="2025-01-17T12:18:54.292149774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:54.293075 containerd[1459]: time="2025-01-17T12:18:54.292519125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:54.293075 containerd[1459]: time="2025-01-17T12:18:54.292538960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:54.293075 containerd[1459]: time="2025-01-17T12:18:54.292677658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:54.311011 kubelet[2491]: E0117 12:18:54.308149 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.311011 kubelet[2491]: W0117 12:18:54.308182 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.311011 kubelet[2491]: E0117 12:18:54.308213 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.313214 kubelet[2491]: E0117 12:18:54.312953 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.313214 kubelet[2491]: W0117 12:18:54.312984 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.313214 kubelet[2491]: E0117 12:18:54.313025 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:18:54.316696 kubelet[2491]: E0117 12:18:54.316666 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.317087 kubelet[2491]: W0117 12:18:54.316878 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.318001 kubelet[2491]: E0117 12:18:54.317094 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.318001 kubelet[2491]: E0117 12:18:54.317949 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.318001 kubelet[2491]: W0117 12:18:54.317969 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.318437 kubelet[2491]: E0117 12:18:54.318284 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.319119 kubelet[2491]: E0117 12:18:54.319006 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.319493 kubelet[2491]: W0117 12:18:54.319325 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.319862 kubelet[2491]: E0117 12:18:54.319646 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.320228 kubelet[2491]: E0117 12:18:54.320129 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.320505 kubelet[2491]: W0117 12:18:54.320400 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.320721 kubelet[2491]: E0117 12:18:54.320642 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:18:54.321761 kubelet[2491]: E0117 12:18:54.321257 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:18:54.321761 kubelet[2491]: W0117 12:18:54.321272 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:18:54.321984 kubelet[2491]: E0117 12:18:54.321939 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
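The triplet repeating above is kubelet's FlexVolume prober: it execs the nodeagent~uds driver with the init subcommand, finds no executable and therefore gets empty stdout, and the JSON decode of the reply fails with "unexpected end of JSON input". A minimal sketch of the contract those driver-call.go lines imply, assuming only what the log shows (an init subcommand answered with a JSON status on stdout); the capability field is illustrative:

```go
// Minimal sketch of a FlexVolume driver entry point, assuming only the
// contract visible in the log: kubelet invokes the binary with a
// subcommand ("init" here) and parses its stdout as JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape kubelet's driver-call.go unmarshals.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		// Empty stdout is exactly what produces
		// "unexpected end of JSON input" in the kubelet entries above.
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // illustrative
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```

Until a binary that answers like this exists at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, every probe cycle logs the same triplet.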
Jan 17 12:18:54.337138 systemd[1]: Started cri-containerd-745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc.scope - libcontainer container 745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc.
Jan 17 12:18:54.338502 kubelet[2491]: E0117 12:18:54.337438 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:18:54.338502 kubelet[2491]: W0117 12:18:54.337453 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:18:54.338502 kubelet[2491]: E0117 12:18:54.337485 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:54.338502 kubelet[2491]: E0117 12:18:54.337853 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:54.339771 kubelet[2491]: E0117 12:18:54.339718 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:18:54.339862 kubelet[2491]: W0117 12:18:54.339792 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:18:54.339862 kubelet[2491]: E0117 12:18:54.339818 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:54.340117 kubelet[2491]: E0117 12:18:54.340060 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:18:54.340117 kubelet[2491]: W0117 12:18:54.340092 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:18:54.340117 kubelet[2491]: E0117 12:18:54.340103 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:54.340979 kubelet[2491]: E0117 12:18:54.340941 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:18:54.340979 kubelet[2491]: W0117 12:18:54.340959 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:18:54.340979 kubelet[2491]: E0117 12:18:54.340971 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:54.360315 kubelet[2491]: E0117 12:18:54.360178 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:18:54.360988 kubelet[2491]: W0117 12:18:54.360724 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:18:54.360988 kubelet[2491]: E0117 12:18:54.360865 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:54.390703 containerd[1459]: time="2025-01-17T12:18:54.390626627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f4594b88c-hzdmq,Uid:0c94f622-80de-4abd-b2f4-f05253e01f5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\""
Jan 17 12:18:54.392821 kubelet[2491]: E0117 12:18:54.392150 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:54.396965 containerd[1459]: time="2025-01-17T12:18:54.396904673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 17 12:18:54.443884 containerd[1459]: time="2025-01-17T12:18:54.443816932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7np4g,Uid:77bacb2f-b10c-4b7c-824b-6ba816dc5586,Namespace:calico-system,Attempt:0,} returns sandbox id \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\""
Jan 17 12:18:54.446186 kubelet[2491]: E0117 12:18:54.446140 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:55.822397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323849580.mount: Deactivated successfully.
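The dns.go:153 errors here and throughout are kubelet hitting the resolver's classic three-nameserver ceiling; note that the applied line it reports, 67.207.67.2 67.207.67.3 67.207.67.2, even contains a duplicate. A hedged sketch of the trimming behavior those entries suggest; the constant and function are illustrative, not kubelet's own code:

```go
// Sketch of the nameserver-limit behavior implied by the dns.go:153
// entries: only the first three nameservers are applied, and kubelet
// logs what survived the cut. Names here are illustrative.
package main

import "fmt"

const maxNameservers = 3 // the classic resolv.conf MAXNS limit

func applyNameserverLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	// The duplicate 67.207.67.2 in the applied line suggests more
	// entries existed before trimming; the fourth server is assumed.
	fmt.Println(applyNameserverLimit([]string{
		"67.207.67.2", "67.207.67.3", "67.207.67.2", "10.96.0.10",
	}))
}
```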
Jan 17 12:18:56.271940 kubelet[2491]: E0117 12:18:56.271858 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb"
Jan 17 12:18:56.758264 containerd[1459]: time="2025-01-17T12:18:56.758130934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:56.759879 containerd[1459]: time="2025-01-17T12:18:56.759782315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 17 12:18:56.760633 containerd[1459]: time="2025-01-17T12:18:56.760483584Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:56.764101 containerd[1459]: time="2025-01-17T12:18:56.764034855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:56.765847 containerd[1459]: time="2025-01-17T12:18:56.765792498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.368838642s"
Jan 17 12:18:56.766045 containerd[1459]: time="2025-01-17T12:18:56.766018389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 17 12:18:56.767642 containerd[1459]: time="2025-01-17T12:18:56.767573507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 17 12:18:56.794913 containerd[1459]: time="2025-01-17T12:18:56.794807358Z" level=info msg="CreateContainer within sandbox \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 12:18:56.868209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359715940.mount: Deactivated successfully.
Jan 17 12:18:56.870608 containerd[1459]: time="2025-01-17T12:18:56.870506936Z" level=info msg="CreateContainer within sandbox \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\""
Jan 17 12:18:56.873250 containerd[1459]: time="2025-01-17T12:18:56.873201395Z" level=info msg="StartContainer for \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\""
Jan 17 12:18:56.927198 systemd[1]: Started cri-containerd-6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f.scope - libcontainer container 6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f.
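The typha pull above reports both the bytes read (31343363) and the wall time (2.368838642s), which puts the effective pull rate at roughly 12.6 MiB/s; a one-off check of that arithmetic:

```go
// Reproduces the pull-rate arithmetic from the containerd entries above;
// both constants are copied verbatim from the log.
package main

import "fmt"

func main() {
	const bytesRead = 31343363.0 // "stop pulling image ...: bytes read=31343363"
	const seconds = 2.368838642  // "Pulled image ... in 2.368838642s"
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~12.6 MiB/s
}
```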
Jan 17 12:18:57.034071 containerd[1459]: time="2025-01-17T12:18:57.033892593Z" level=info msg="StartContainer for \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\" returns successfully"
Jan 17 12:18:57.469843 kubelet[2491]: E0117 12:18:57.468456 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:57.528053 kubelet[2491]: E0117 12:18:57.528005 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:18:57.528640 kubelet[2491]: W0117 12:18:57.528420 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:18:57.528640 kubelet[2491]: E0117 12:18:57.528463 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:57.529232 kubelet[2491]: E0117 12:18:57.529070 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:18:57.529232 kubelet[2491]: W0117 12:18:57.529093 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:18:57.529232 kubelet[2491]: E0117 12:18:57.529133 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:57.529963 kubelet[2491]: E0117 12:18:57.529700 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:18:57.529963 kubelet[2491]: W0117 12:18:57.529718 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:18:57.529963 kubelet[2491]: E0117 12:18:57.529799 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:57.530329 kubelet[2491]: E0117 12:18:57.530200 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:18:57.530329 kubelet[2491]: W0117 12:18:57.530218 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:18:57.530329 kubelet[2491]: E0117 12:18:57.530252 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:18:58.274374 kubelet[2491]: E0117 12:18:58.274124 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb"
Jan 17 12:18:58.337486 containerd[1459]: time="2025-01-17T12:18:58.337397517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:58.341393 containerd[1459]: time="2025-01-17T12:18:58.340435952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 17 12:18:58.342789 containerd[1459]: time="2025-01-17T12:18:58.342316492Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:58.347121 containerd[1459]: time="2025-01-17T12:18:58.346783388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:58.348421 containerd[1459]: time="2025-01-17T12:18:58.348104063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.580493181s"
Jan 17 12:18:58.348421 containerd[1459]: time="2025-01-17T12:18:58.348160081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 17 12:18:58.353905 containerd[1459]: time="2025-01-17T12:18:58.353607979Z" level=info msg="CreateContainer within sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 12:18:58.379252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174860703.mount: Deactivated successfully.
Jan 17 12:18:58.382774 containerd[1459]: time="2025-01-17T12:18:58.382674128Z" level=info msg="CreateContainer within sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\""
Jan 17 12:18:58.384550 containerd[1459]: time="2025-01-17T12:18:58.384291963Z" level=info msg="StartContainer for \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\""
Jan 17 12:18:58.446001 systemd[1]: Started cri-containerd-d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce.scope - libcontainer container d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce.
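The flexvol-driver container started above runs Calico's pod2daemon-flexvol image, which installs the uds binary into the very directory kubelet has been probing; once it runs, the nodeagent~uds triplets stop. A small illustrative check for the path taken verbatim from the log (the check itself is an assumption about where to look, not Calico's installer code):

```go
// Sketch: verify the Calico flexvol driver landed where kubelet probes,
// which is what silences the nodeagent~uds errors above.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied verbatim from the kubelet log entries.
	p := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	info, err := os.Stat(p)
	if err != nil {
		fmt.Println("driver missing:", err)
		return
	}
	fmt.Printf("driver present, mode %v\n", info.Mode())
}
```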
Jan 17 12:18:58.476677 kubelet[2491]: I0117 12:18:58.474851 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:18:58.476677 kubelet[2491]: E0117 12:18:58.475297 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:58.495344 containerd[1459]: time="2025-01-17T12:18:58.494140078Z" level=info msg="StartContainer for \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\" returns successfully"
Jan 17 12:18:58.526580 systemd[1]: cri-containerd-d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce.scope: Deactivated successfully.
Jan 17 12:18:58.578592 containerd[1459]: time="2025-01-17T12:18:58.576215023Z" level=info msg="shim disconnected" id=d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce namespace=k8s.io
Jan 17 12:18:58.579111 containerd[1459]: time="2025-01-17T12:18:58.579063963Z" level=warning msg="cleaning up after shim disconnected" id=d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce namespace=k8s.io
Jan 17 12:18:58.579248 containerd[1459]: time="2025-01-17T12:18:58.579223238Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:18:58.778407 systemd[1]: run-containerd-runc-k8s.io-d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce-runc.ybmffh.mount: Deactivated successfully.
Jan 17 12:18:58.778520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce-rootfs.mount: Deactivated successfully.
Jan 17 12:18:59.481731 kubelet[2491]: E0117 12:18:59.481673 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:18:59.485187 containerd[1459]: time="2025-01-17T12:18:59.485107104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 17 12:18:59.516960 kubelet[2491]: I0117 12:18:59.516840 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f4594b88c-hzdmq" podStartSLOduration=4.144351828 podStartE2EDuration="6.516789128s" podCreationTimestamp="2025-01-17 12:18:53 +0000 UTC" firstStartedPulling="2025-01-17 12:18:54.394936277 +0000 UTC m=+16.305438862" lastFinishedPulling="2025-01-17 12:18:56.767373577 +0000 UTC m=+18.677876162" observedRunningTime="2025-01-17 12:18:57.489107633 +0000 UTC m=+19.399610232" watchObservedRunningTime="2025-01-17 12:18:59.516789128 +0000 UTC m=+21.427291735"
Jan 17 12:19:00.274712 kubelet[2491]: E0117 12:19:00.272733 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb"
Jan 17 12:19:02.275549 kubelet[2491]: E0117 12:19:02.275030 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb"
Jan 17 12:19:04.274160 kubelet[2491]: E0117 12:19:04.273544 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb"
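The pod_startup_latency_tracker entry above encodes an arithmetic relationship the logged values confirm: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (6.516789128s), and podStartSLOduration subtracts the image-pull window, lastFinishedPulling minus firstStartedPulling, from that (4.144351828s). Which timestamp feeds which figure is inferred from the numbers rather than from kubelet source; a sketch reproducing the math with values copied from the log line:

```go
// Reproduces the podStartSLOduration arithmetic from the tracker entry.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-17 12:18:53 +0000 UTC")
	firstPull := mustParse("2025-01-17 12:18:54.394936277 +0000 UTC")
	lastPull := mustParse("2025-01-17 12:18:56.767373577 +0000 UTC")
	running := mustParse("2025-01-17 12:18:59.516789128 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // 6.516789128s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 4.144351828s, matches podStartSLOduration
	fmt.Println(e2e, slo)
}
```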
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb" Jan 17 12:19:04.989798 containerd[1459]: time="2025-01-17T12:19:04.989167345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:04.992006 containerd[1459]: time="2025-01-17T12:19:04.991442666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:19:04.994379 containerd[1459]: time="2025-01-17T12:19:04.994239984Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:04.998600 containerd[1459]: time="2025-01-17T12:19:04.998513764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:05.001704 containerd[1459]: time="2025-01-17T12:19:05.001494008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.516317756s" Jan 17 12:19:05.001704 containerd[1459]: time="2025-01-17T12:19:05.001568786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:19:05.010494 containerd[1459]: time="2025-01-17T12:19:05.010424757Z" level=info msg="CreateContainer within sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:19:05.057375 containerd[1459]: time="2025-01-17T12:19:05.057274349Z" level=info msg="CreateContainer within sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\"" Jan 17 12:19:05.060991 containerd[1459]: time="2025-01-17T12:19:05.060924132Z" level=info msg="StartContainer for \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\"" Jan 17 12:19:05.203188 systemd[1]: Started cri-containerd-ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798.scope - libcontainer container ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798. 
Jan 17 12:19:05.293678 containerd[1459]: time="2025-01-17T12:19:05.291961713Z" level=info msg="StartContainer for \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\" returns successfully"
Jan 17 12:19:05.514008 kubelet[2491]: E0117 12:19:05.513535 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:06.273221 kubelet[2491]: E0117 12:19:06.273144 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb"
Jan 17 12:19:06.467946 systemd[1]: cri-containerd-ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798.scope: Deactivated successfully.
Jan 17 12:19:06.542571 kubelet[2491]: E0117 12:19:06.535710 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:06.587332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798-rootfs.mount: Deactivated successfully.
Jan 17 12:19:06.603801 containerd[1459]: time="2025-01-17T12:19:06.603663999Z" level=info msg="shim disconnected" id=ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798 namespace=k8s.io
Jan 17 12:19:06.603801 containerd[1459]: time="2025-01-17T12:19:06.603795493Z" level=warning msg="cleaning up after shim disconnected" id=ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798 namespace=k8s.io
Jan 17 12:19:06.603801 containerd[1459]: time="2025-01-17T12:19:06.603811223Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:19:06.652020 kubelet[2491]: I0117 12:19:06.650784 2491 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 17 12:19:06.759846 kubelet[2491]: W0117 12:19:06.754376 2491 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.3.0-f-fd30d73867" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-f-fd30d73867' and this object
Jan 17 12:19:06.759846 kubelet[2491]: E0117 12:19:06.754571 2491 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081.3.0-f-fd30d73867\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.0-f-fd30d73867' and this object" logger="UnhandledError"
Jan 17 12:19:06.763212 systemd[1]: Created slice kubepods-burstable-pod6f470594_2379_4193_8b55_bd3e6a5996c1.slice - libcontainer container kubepods-burstable-pod6f470594_2379_4193_8b55_bd3e6a5996c1.slice.
Jan 17 12:19:06.811090 systemd[1]: Created slice kubepods-besteffort-pod82477d9d_231e_4438_b265_cae0af210b64.slice - libcontainer container kubepods-besteffort-pod82477d9d_231e_4438_b265_cae0af210b64.slice.
Jan 17 12:19:06.836628 systemd[1]: Created slice kubepods-besteffort-pod5e7faaed_af39_479f_9b85_c936c88dbeb7.slice - libcontainer container kubepods-besteffort-pod5e7faaed_af39_479f_9b85_c936c88dbeb7.slice.
Jan 17 12:19:06.858209 systemd[1]: Created slice kubepods-besteffort-podd2d2e829_8efa_4f4c_b9c2_2cd87395f520.slice - libcontainer container kubepods-besteffort-podd2d2e829_8efa_4f4c_b9c2_2cd87395f520.slice.
Jan 17 12:19:06.861523 kubelet[2491]: I0117 12:19:06.861453 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5lcr\" (UniqueName: \"kubernetes.io/projected/6f470594-2379-4193-8b55-bd3e6a5996c1-kube-api-access-f5lcr\") pod \"coredns-6f6b679f8f-zgmwb\" (UID: \"6f470594-2379-4193-8b55-bd3e6a5996c1\") " pod="kube-system/coredns-6f6b679f8f-zgmwb"
Jan 17 12:19:06.862944 kubelet[2491]: I0117 12:19:06.862608 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f470594-2379-4193-8b55-bd3e6a5996c1-config-volume\") pod \"coredns-6f6b679f8f-zgmwb\" (UID: \"6f470594-2379-4193-8b55-bd3e6a5996c1\") " pod="kube-system/coredns-6f6b679f8f-zgmwb"
Jan 17 12:19:06.871725 systemd[1]: Created slice kubepods-burstable-pod540c0bc8_bb65_4107_8514_8f6a7b04b667.slice - libcontainer container kubepods-burstable-pod540c0bc8_bb65_4107_8514_8f6a7b04b667.slice.
Jan 17 12:19:06.963862 kubelet[2491]: I0117 12:19:06.963704 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5e7faaed-af39-479f-9b85-c936c88dbeb7-calico-apiserver-certs\") pod \"calico-apiserver-7b466f6854-xrf2v\" (UID: \"5e7faaed-af39-479f-9b85-c936c88dbeb7\") " pod="calico-apiserver/calico-apiserver-7b466f6854-xrf2v"
Jan 17 12:19:06.964914 kubelet[2491]: I0117 12:19:06.964458 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82477d9d-231e-4438-b265-cae0af210b64-tigera-ca-bundle\") pod \"calico-kube-controllers-75f85c7775-l4kfg\" (UID: \"82477d9d-231e-4438-b265-cae0af210b64\") " pod="calico-system/calico-kube-controllers-75f85c7775-l4kfg"
Jan 17 12:19:06.964914 kubelet[2491]: I0117 12:19:06.964534 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d2d2e829-8efa-4f4c-b9c2-2cd87395f520-calico-apiserver-certs\") pod \"calico-apiserver-7b466f6854-hrc5h\" (UID: \"d2d2e829-8efa-4f4c-b9c2-2cd87395f520\") " pod="calico-apiserver/calico-apiserver-7b466f6854-hrc5h"
Jan 17 12:19:06.964914 kubelet[2491]: I0117 12:19:06.964563 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckjv\" (UniqueName: \"kubernetes.io/projected/5e7faaed-af39-479f-9b85-c936c88dbeb7-kube-api-access-6ckjv\") pod \"calico-apiserver-7b466f6854-xrf2v\" (UID: \"5e7faaed-af39-479f-9b85-c936c88dbeb7\") " pod="calico-apiserver/calico-apiserver-7b466f6854-xrf2v"
Jan 17 12:19:06.964914 kubelet[2491]: I0117 12:19:06.964655 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcqxs\" (UniqueName: \"kubernetes.io/projected/540c0bc8-bb65-4107-8514-8f6a7b04b667-kube-api-access-qcqxs\") pod \"coredns-6f6b679f8f-kks2v\" (UID: \"540c0bc8-bb65-4107-8514-8f6a7b04b667\") " pod="kube-system/coredns-6f6b679f8f-kks2v"
pod="kube-system/coredns-6f6b679f8f-kks2v" Jan 17 12:19:06.964914 kubelet[2491]: I0117 12:19:06.964684 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st7s2\" (UniqueName: \"kubernetes.io/projected/82477d9d-231e-4438-b265-cae0af210b64-kube-api-access-st7s2\") pod \"calico-kube-controllers-75f85c7775-l4kfg\" (UID: \"82477d9d-231e-4438-b265-cae0af210b64\") " pod="calico-system/calico-kube-controllers-75f85c7775-l4kfg" Jan 17 12:19:06.965280 kubelet[2491]: I0117 12:19:06.964906 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l62f\" (UniqueName: \"kubernetes.io/projected/d2d2e829-8efa-4f4c-b9c2-2cd87395f520-kube-api-access-6l62f\") pod \"calico-apiserver-7b466f6854-hrc5h\" (UID: \"d2d2e829-8efa-4f4c-b9c2-2cd87395f520\") " pod="calico-apiserver/calico-apiserver-7b466f6854-hrc5h" Jan 17 12:19:06.965280 kubelet[2491]: I0117 12:19:06.964970 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/540c0bc8-bb65-4107-8514-8f6a7b04b667-config-volume\") pod \"coredns-6f6b679f8f-kks2v\" (UID: \"540c0bc8-bb65-4107-8514-8f6a7b04b667\") " pod="kube-system/coredns-6f6b679f8f-kks2v" Jan 17 12:19:07.158392 containerd[1459]: time="2025-01-17T12:19:07.149545089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b466f6854-xrf2v,Uid:5e7faaed-af39-479f-9b85-c936c88dbeb7,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:19:07.164113 containerd[1459]: time="2025-01-17T12:19:07.164008732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b466f6854-hrc5h,Uid:d2d2e829-8efa-4f4c-b9c2-2cd87395f520,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:19:07.434982 containerd[1459]: time="2025-01-17T12:19:07.434468317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f85c7775-l4kfg,Uid:82477d9d-231e-4438-b265-cae0af210b64,Namespace:calico-system,Attempt:0,}" Jan 17 12:19:07.546032 kubelet[2491]: E0117 12:19:07.544922 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:07.560819 containerd[1459]: time="2025-01-17T12:19:07.559628662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:19:07.771778 containerd[1459]: time="2025-01-17T12:19:07.771329489Z" level=error msg="Failed to destroy network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:07.772732 containerd[1459]: time="2025-01-17T12:19:07.772676302Z" level=error msg="Failed to destroy network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:07.780380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f-shm.mount: Deactivated successfully. 
Jan 17 12:19:07.780619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386-shm.mount: Deactivated successfully.
Jan 17 12:19:07.782056 containerd[1459]: time="2025-01-17T12:19:07.781983537Z" level=error msg="encountered an error cleaning up failed sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.782259 containerd[1459]: time="2025-01-17T12:19:07.782223272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b466f6854-xrf2v,Uid:5e7faaed-af39-479f-9b85-c936c88dbeb7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.786558 containerd[1459]: time="2025-01-17T12:19:07.785561389Z" level=error msg="encountered an error cleaning up failed sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.786558 containerd[1459]: time="2025-01-17T12:19:07.785713951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b466f6854-hrc5h,Uid:d2d2e829-8efa-4f4c-b9c2-2cd87395f520,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.797316 kubelet[2491]: E0117 12:19:07.794510 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.797316 kubelet[2491]: E0117 12:19:07.794612 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b466f6854-hrc5h"
Jan 17 12:19:07.797316 kubelet[2491]: E0117 12:19:07.794651 2491 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b466f6854-hrc5h"
Jan 17 12:19:07.797619 kubelet[2491]: E0117 12:19:07.794715 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b466f6854-hrc5h_calico-apiserver(d2d2e829-8efa-4f4c-b9c2-2cd87395f520)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b466f6854-hrc5h_calico-apiserver(d2d2e829-8efa-4f4c-b9c2-2cd87395f520)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b466f6854-hrc5h" podUID="d2d2e829-8efa-4f4c-b9c2-2cd87395f520"
Jan 17 12:19:07.797619 kubelet[2491]: E0117 12:19:07.795110 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.797619 kubelet[2491]: E0117 12:19:07.795157 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b466f6854-xrf2v"
Jan 17 12:19:07.797836 kubelet[2491]: E0117 12:19:07.795185 2491 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b466f6854-xrf2v"
Jan 17 12:19:07.797836 kubelet[2491]: E0117 12:19:07.795226 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b466f6854-xrf2v_calico-apiserver(5e7faaed-af39-479f-9b85-c936c88dbeb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b466f6854-xrf2v_calico-apiserver(5e7faaed-af39-479f-9b85-c936c88dbeb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b466f6854-xrf2v" podUID="5e7faaed-af39-479f-9b85-c936c88dbeb7"
Jan 17 12:19:07.800163 containerd[1459]: time="2025-01-17T12:19:07.800086338Z" level=error msg="Failed to destroy network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.801523 containerd[1459]: time="2025-01-17T12:19:07.801291764Z" level=error msg="encountered an error cleaning up failed sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.801523 containerd[1459]: time="2025-01-17T12:19:07.801398541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f85c7775-l4kfg,Uid:82477d9d-231e-4438-b265-cae0af210b64,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.803510 kubelet[2491]: E0117 12:19:07.802157 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:07.803510 kubelet[2491]: E0117 12:19:07.802326 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f85c7775-l4kfg"
Jan 17 12:19:07.804478 kubelet[2491]: E0117 12:19:07.803948 2491 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f85c7775-l4kfg"
Jan 17 12:19:07.805496 kubelet[2491]: E0117 12:19:07.805221 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75f85c7775-l4kfg_calico-system(82477d9d-231e-4438-b265-cae0af210b64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75f85c7775-l4kfg_calico-system(82477d9d-231e-4438-b265-cae0af210b64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75f85c7775-l4kfg" podUID="82477d9d-231e-4438-b265-cae0af210b64"
Jan 17 12:19:07.809422 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b-shm.mount: Deactivated successfully.
Jan 17 12:19:07.966560 kubelet[2491]: E0117 12:19:07.965911 2491 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 17 12:19:07.966560 kubelet[2491]: E0117 12:19:07.966049 2491 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f470594-2379-4193-8b55-bd3e6a5996c1-config-volume podName:6f470594-2379-4193-8b55-bd3e6a5996c1 nodeName:}" failed. No retries permitted until 2025-01-17 12:19:08.46601946 +0000 UTC m=+30.376522057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6f470594-2379-4193-8b55-bd3e6a5996c1-config-volume") pod "coredns-6f6b679f8f-zgmwb" (UID: "6f470594-2379-4193-8b55-bd3e6a5996c1") : failed to sync configmap cache: timed out waiting for the condition
Jan 17 12:19:08.081181 kubelet[2491]: E0117 12:19:08.075051 2491 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 17 12:19:08.081181 kubelet[2491]: E0117 12:19:08.075232 2491 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/540c0bc8-bb65-4107-8514-8f6a7b04b667-config-volume podName:540c0bc8-bb65-4107-8514-8f6a7b04b667 nodeName:}" failed. No retries permitted until 2025-01-17 12:19:08.575195733 +0000 UTC m=+30.485698322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/540c0bc8-bb65-4107-8514-8f6a7b04b667-config-volume") pod "coredns-6f6b679f8f-kks2v" (UID: "540c0bc8-bb65-4107-8514-8f6a7b04b667") : failed to sync configmap cache: timed out waiting for the condition
Jan 17 12:19:08.288878 systemd[1]: Created slice kubepods-besteffort-podb99954fd_00d0_4234_8172_969ac6f807eb.slice - libcontainer container kubepods-besteffort-podb99954fd_00d0_4234_8172_969ac6f807eb.slice.
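The two MountVolume.SetUp failures above are retried on a schedule rather than immediately: the volume manager defers the first retry by 500ms ("durationBeforeRetry 500ms") and backs off further on repeated failures. A sketch of that schedule, assuming a doubling delay and a cap of about two minutes (the doubling and the cap are assumptions for illustration, not values taken from this log):

    from datetime import datetime, timedelta

    def next_retry(last_failure: datetime, consecutive_failures: int,
                   base: timedelta = timedelta(milliseconds=500),
                   cap: timedelta = timedelta(minutes=2)) -> datetime:
        # 500ms, 1s, 2s, 4s, ... up to the cap.
        delay = min(base * (2 ** (consecutive_failures - 1)), cap)
        return last_failure + delay

    # First failure at 12:19:07.966 -> retry permitted at ~12:19:08.466,
    # matching "No retries permitted until ... (durationBeforeRetry 500ms)" above.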
Jan 17 12:19:08.297220 containerd[1459]: time="2025-01-17T12:19:08.297117157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h55hv,Uid:b99954fd-00d0-4234-8172-969ac6f807eb,Namespace:calico-system,Attempt:0,}"
Jan 17 12:19:08.501973 containerd[1459]: time="2025-01-17T12:19:08.501795466Z" level=error msg="Failed to destroy network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:08.502981 containerd[1459]: time="2025-01-17T12:19:08.502590513Z" level=error msg="encountered an error cleaning up failed sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:08.502981 containerd[1459]: time="2025-01-17T12:19:08.502813890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h55hv,Uid:b99954fd-00d0-4234-8172-969ac6f807eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:08.503364 kubelet[2491]: E0117 12:19:08.503231 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:08.503454 kubelet[2491]: E0117 12:19:08.503410 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h55hv"
Jan 17 12:19:08.503454 kubelet[2491]: E0117 12:19:08.503445 2491 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h55hv"
Jan 17 12:19:08.503593 kubelet[2491]: E0117 12:19:08.503515 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h55hv_calico-system(b99954fd-00d0-4234-8172-969ac6f807eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h55hv_calico-system(b99954fd-00d0-4234-8172-969ac6f807eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb"
Jan 17 12:19:08.561857 kubelet[2491]: I0117 12:19:08.561189 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:08.577320 kubelet[2491]: I0117 12:19:08.575872 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:08.589416 kubelet[2491]: E0117 12:19:08.582301 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:08.589619 containerd[1459]: time="2025-01-17T12:19:08.585484912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zgmwb,Uid:6f470594-2379-4193-8b55-bd3e6a5996c1,Namespace:kube-system,Attempt:0,}"
Jan 17 12:19:08.591347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9-shm.mount: Deactivated successfully.
Jan 17 12:19:08.596677 containerd[1459]: time="2025-01-17T12:19:08.596526816Z" level=info msg="StopPodSandbox for \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\""
Jan 17 12:19:08.601109 containerd[1459]: time="2025-01-17T12:19:08.601044934Z" level=info msg="StopPodSandbox for \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\""
Jan 17 12:19:08.602092 containerd[1459]: time="2025-01-17T12:19:08.601373370Z" level=info msg="Ensure that sandbox 5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b in task-service has been cleanup successfully"
Jan 17 12:19:08.613275 kubelet[2491]: I0117 12:19:08.612085 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:08.615513 containerd[1459]: time="2025-01-17T12:19:08.613605276Z" level=info msg="Ensure that sandbox ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386 in task-service has been cleanup successfully"
Jan 17 12:19:08.617525 containerd[1459]: time="2025-01-17T12:19:08.617371993Z" level=info msg="StopPodSandbox for \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\""
Jan 17 12:19:08.622338 containerd[1459]: time="2025-01-17T12:19:08.621594238Z" level=info msg="Ensure that sandbox e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9 in task-service has been cleanup successfully"
Jan 17 12:19:08.633842 kubelet[2491]: I0117 12:19:08.633176 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:08.637870 containerd[1459]: time="2025-01-17T12:19:08.637254365Z" level=info msg="StopPodSandbox for \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\""
Jan 17 12:19:08.639799 containerd[1459]: time="2025-01-17T12:19:08.639703141Z" level=info msg="Ensure that sandbox e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f in task-service has been cleanup successfully"
Jan 17 12:19:08.678845 kubelet[2491]: E0117 12:19:08.678347 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:08.684025 containerd[1459]: time="2025-01-17T12:19:08.682044748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kks2v,Uid:540c0bc8-bb65-4107-8514-8f6a7b04b667,Namespace:kube-system,Attempt:0,}"
Jan 17 12:19:08.867313 containerd[1459]: time="2025-01-17T12:19:08.866608849Z" level=error msg="StopPodSandbox for \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\" failed" error="failed to destroy network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:08.876881 kubelet[2491]: E0117 12:19:08.867333 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:08.876881 kubelet[2491]: E0117 12:19:08.867424 2491 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"}
Jan 17 12:19:08.876881 kubelet[2491]: E0117 12:19:08.867531 2491 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82477d9d-231e-4438-b265-cae0af210b64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:19:08.876881 kubelet[2491]: E0117 12:19:08.867571 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82477d9d-231e-4438-b265-cae0af210b64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75f85c7775-l4kfg" podUID="82477d9d-231e-4438-b265-cae0af210b64"
Jan 17 12:19:08.932590 containerd[1459]: time="2025-01-17T12:19:08.931676322Z" level=error msg="StopPodSandbox for \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\" failed" error="failed to destroy network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:08.933444 kubelet[2491]: E0117 12:19:08.932485 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:08.933444 kubelet[2491]: E0117 12:19:08.932569 2491 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"}
Jan 17 12:19:08.933444 kubelet[2491]: E0117 12:19:08.932625 2491 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b99954fd-00d0-4234-8172-969ac6f807eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:19:08.933444 kubelet[2491]: E0117 12:19:08.932674 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b99954fd-00d0-4234-8172-969ac6f807eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h55hv" podUID="b99954fd-00d0-4234-8172-969ac6f807eb"
Jan 17 12:19:08.957505 containerd[1459]: time="2025-01-17T12:19:08.957414407Z" level=error msg="StopPodSandbox for \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\" failed" error="failed to destroy network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:08.958100 kubelet[2491]: E0117 12:19:08.957917 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:08.958100 kubelet[2491]: E0117 12:19:08.958044 2491 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"}
Jan 17 12:19:08.958100 kubelet[2491]: E0117 12:19:08.958086 2491 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e7faaed-af39-479f-9b85-c936c88dbeb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:19:08.958551 kubelet[2491]: E0117 12:19:08.958155 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e7faaed-af39-479f-9b85-c936c88dbeb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b466f6854-xrf2v" podUID="5e7faaed-af39-479f-9b85-c936c88dbeb7"
Jan 17 12:19:08.959411 containerd[1459]: time="2025-01-17T12:19:08.959241892Z" level=error msg="StopPodSandbox for \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\" failed" error="failed to destroy network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:08.959636 kubelet[2491]: E0117 12:19:08.959562 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:08.959766 kubelet[2491]: E0117 12:19:08.959647 2491 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"}
Jan 17 12:19:08.959766 kubelet[2491]: E0117 12:19:08.959689 2491 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2d2e829-8efa-4f4c-b9c2-2cd87395f520\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:19:08.959766 kubelet[2491]: E0117 12:19:08.959714 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2d2e829-8efa-4f4c-b9c2-2cd87395f520\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b466f6854-hrc5h" podUID="d2d2e829-8efa-4f4c-b9c2-2cd87395f520"
Jan 17 12:19:09.028090 containerd[1459]: time="2025-01-17T12:19:09.027712249Z" level=error msg="Failed to destroy network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.030795 containerd[1459]: time="2025-01-17T12:19:09.029078973Z" level=error msg="encountered an error cleaning up failed sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.030795 containerd[1459]: time="2025-01-17T12:19:09.029244332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zgmwb,Uid:6f470594-2379-4193-8b55-bd3e6a5996c1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.031043 kubelet[2491]: E0117 12:19:09.029589 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.031043 kubelet[2491]: E0117 12:19:09.029681 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zgmwb"
Jan 17 12:19:09.031043 kubelet[2491]: E0117 12:19:09.029716 2491 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zgmwb"
Jan 17 12:19:09.031217 kubelet[2491]: E0117 12:19:09.029821 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-zgmwb_kube-system(6f470594-2379-4193-8b55-bd3e6a5996c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-zgmwb_kube-system(6f470594-2379-4193-8b55-bd3e6a5996c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zgmwb" podUID="6f470594-2379-4193-8b55-bd3e6a5996c1"
Jan 17 12:19:09.082918 containerd[1459]: time="2025-01-17T12:19:09.082791957Z" level=error msg="Failed to destroy network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.083705 containerd[1459]: time="2025-01-17T12:19:09.083629861Z" level=error msg="encountered an error cleaning up failed sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.083891 containerd[1459]: time="2025-01-17T12:19:09.083812230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kks2v,Uid:540c0bc8-bb65-4107-8514-8f6a7b04b667,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.085331 kubelet[2491]: E0117 12:19:09.084469 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.085331 kubelet[2491]: E0117 12:19:09.084594 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kks2v"
Jan 17 12:19:09.085331 kubelet[2491]: E0117 12:19:09.084633 2491 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kks2v"
Jan 17 12:19:09.087901 kubelet[2491]: E0117 12:19:09.084723 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kks2v_kube-system(540c0bc8-bb65-4107-8514-8f6a7b04b667)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kks2v_kube-system(540c0bc8-bb65-4107-8514-8f6a7b04b667)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kks2v" podUID="540c0bc8-bb65-4107-8514-8f6a7b04b667"
Jan 17 12:19:09.583871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40-shm.mount: Deactivated successfully.
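By this point the same add/delete failure has been reported for five different pods. When reading a capture like this, it helps to group the "Error syncing pod, skipping" records by pod UID; a small helper written for this log format (hypothetical, not part of any tool named in the log):

    import re
    from collections import Counter

    POD_UID = re.compile(r'podUID="([0-9a-f-]{36})"')

    def failing_pods(journal_text: str) -> Counter:
        """Count 'Error syncing pod' style records per pod UID."""
        return Counter(POD_UID.findall(journal_text))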
Jan 17 12:19:09.584033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a-shm.mount: Deactivated successfully.
Jan 17 12:19:09.639809 kubelet[2491]: I0117 12:19:09.638226 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:09.645776 containerd[1459]: time="2025-01-17T12:19:09.644014743Z" level=info msg="StopPodSandbox for \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\""
Jan 17 12:19:09.645776 containerd[1459]: time="2025-01-17T12:19:09.644582026Z" level=info msg="Ensure that sandbox dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a in task-service has been cleanup successfully"
Jan 17 12:19:09.654137 kubelet[2491]: I0117 12:19:09.654073 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"
Jan 17 12:19:09.667444 containerd[1459]: time="2025-01-17T12:19:09.667386168Z" level=info msg="StopPodSandbox for \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\""
Jan 17 12:19:09.671024 containerd[1459]: time="2025-01-17T12:19:09.670933714Z" level=info msg="Ensure that sandbox e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40 in task-service has been cleanup successfully"
Jan 17 12:19:09.823142 containerd[1459]: time="2025-01-17T12:19:09.823062141Z" level=error msg="StopPodSandbox for \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\" failed" error="failed to destroy network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.823937 kubelet[2491]: E0117 12:19:09.823663 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"
Jan 17 12:19:09.823937 kubelet[2491]: E0117 12:19:09.823840 2491 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"}
Jan 17 12:19:09.824411 kubelet[2491]: E0117 12:19:09.824275 2491 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"540c0bc8-bb65-4107-8514-8f6a7b04b667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:19:09.824411 kubelet[2491]: E0117 12:19:09.824346 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"540c0bc8-bb65-4107-8514-8f6a7b04b667\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kks2v" podUID="540c0bc8-bb65-4107-8514-8f6a7b04b667"
Jan 17 12:19:09.830680 containerd[1459]: time="2025-01-17T12:19:09.830255890Z" level=error msg="StopPodSandbox for \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\" failed" error="failed to destroy network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:09.831533 kubelet[2491]: E0117 12:19:09.831323 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:09.831533 kubelet[2491]: E0117 12:19:09.831388 2491 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"}
Jan 17 12:19:09.831533 kubelet[2491]: E0117 12:19:09.831426 2491 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f470594-2379-4193-8b55-bd3e6a5996c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:19:09.831533 kubelet[2491]: E0117 12:19:09.831487 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f470594-2379-4193-8b55-bd3e6a5996c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zgmwb" podUID="6f470594-2379-4193-8b55-bd3e6a5996c1"
Jan 17 12:19:15.781527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540640494.mount: Deactivated successfully.
Jan 17 12:19:15.845115 containerd[1459]: time="2025-01-17T12:19:15.837732007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:15.856155 containerd[1459]: time="2025-01-17T12:19:15.856037758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 17 12:19:15.893983 containerd[1459]: time="2025-01-17T12:19:15.893837114Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:15.975333 containerd[1459]: time="2025-01-17T12:19:15.975260100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:15.977866 containerd[1459]: time="2025-01-17T12:19:15.977791475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.415190757s"
Jan 17 12:19:15.977866 containerd[1459]: time="2025-01-17T12:19:15.977841187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 17 12:19:16.074593 containerd[1459]: time="2025-01-17T12:19:16.074101659Z" level=info msg="CreateContainer within sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 17 12:19:16.112051 containerd[1459]: time="2025-01-17T12:19:16.111959034Z" level=info msg="CreateContainer within sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\""
Jan 17 12:19:16.112827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount623703103.mount: Deactivated successfully.
Jan 17 12:19:16.117634 containerd[1459]: time="2025-01-17T12:19:16.117571976Z" level=info msg="StartContainer for \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\""
Jan 17 12:19:16.209060 systemd[1]: Started cri-containerd-c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9.scope - libcontainer container c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9.
Jan 17 12:19:16.281538 containerd[1459]: time="2025-01-17T12:19:16.281452613Z" level=info msg="StartContainer for \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\" returns successfully"
Jan 17 12:19:16.402242 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 17 12:19:16.403846 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
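The pull that unblocks everything finishes here: roughly 142.7 MB in 8.415s. Quick arithmetic on the values logged above:

    bytes_read = 142742010    # "bytes read" reported by containerd
    duration_s = 8.415190757  # "in 8.415190757s" from the Pulled line
    print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")  # ~17.0 MB/s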
Jan 17 12:19:16.739501 kubelet[2491]: E0117 12:19:16.739425 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:18.725092 kubelet[2491]: I0117 12:19:18.724957 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:19:18.725736 kubelet[2491]: E0117 12:19:18.725486 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:18.740339 kubelet[2491]: E0117 12:19:18.740221 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:18.757782 kubelet[2491]: I0117 12:19:18.757374 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7np4g" podStartSLOduration=4.18285763 podStartE2EDuration="25.755943762s" podCreationTimestamp="2025-01-17 12:18:53 +0000 UTC" firstStartedPulling="2025-01-17 12:18:54.449280704 +0000 UTC m=+16.359783289" lastFinishedPulling="2025-01-17 12:19:16.022366851 +0000 UTC m=+37.932869421" observedRunningTime="2025-01-17 12:19:16.775922669 +0000 UTC m=+38.686425272" watchObservedRunningTime="2025-01-17 12:19:18.755943762 +0000 UTC m=+40.666446358"
Jan 17 12:19:19.719789 kernel: bpftool[3776]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 17 12:19:20.039429 systemd-networkd[1366]: vxlan.calico: Link UP
Jan 17 12:19:20.039442 systemd-networkd[1366]: vxlan.calico: Gained carrier
Jan 17 12:19:20.282107 containerd[1459]: time="2025-01-17T12:19:20.281886143Z" level=info msg="StopPodSandbox for \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\""
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.388 [INFO][3829] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.389 [INFO][3829] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" iface="eth0" netns="/var/run/netns/cni-a59c7830-554b-4adf-db82-a589663155cb"
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.390 [INFO][3829] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" iface="eth0" netns="/var/run/netns/cni-a59c7830-554b-4adf-db82-a589663155cb"
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.391 [INFO][3829] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" iface="eth0" netns="/var/run/netns/cni-a59c7830-554b-4adf-db82-a589663155cb"
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.391 [INFO][3829] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.391 [INFO][3829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.550 [INFO][3842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" HandleID="k8s-pod-network.ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.551 [INFO][3842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.552 [INFO][3842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.571 [WARNING][3842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" HandleID="k8s-pod-network.ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.571 [INFO][3842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" HandleID="k8s-pod-network.ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.574 [INFO][3842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:20.579159 containerd[1459]: 2025-01-17 12:19:20.576 [INFO][3829] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:20.585659 systemd[1]: run-netns-cni\x2da59c7830\x2d554b\x2d4adf\x2ddb82\x2da589663155cb.mount: Deactivated successfully.
Jan 17 12:19:20.589571 containerd[1459]: time="2025-01-17T12:19:20.589385931Z" level=info msg="TearDown network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\" successfully"
Jan 17 12:19:20.589571 containerd[1459]: time="2025-01-17T12:19:20.589444541Z" level=info msg="StopPodSandbox for \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\" returns successfully"
Jan 17 12:19:20.596821 containerd[1459]: time="2025-01-17T12:19:20.596715038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b466f6854-xrf2v,Uid:5e7faaed-af39-479f-9b85-c936c88dbeb7,Namespace:calico-apiserver,Attempt:1,}"
Jan 17 12:19:20.819154 systemd-networkd[1366]: cali2edcfdc5120: Link UP
Jan 17 12:19:20.820905 systemd-networkd[1366]: cali2edcfdc5120: Gained carrier
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.677 [INFO][3874] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0 calico-apiserver-7b466f6854- calico-apiserver 5e7faaed-af39-479f-9b85-c936c88dbeb7 856 0 2025-01-17 12:18:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b466f6854 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-f-fd30d73867 calico-apiserver-7b466f6854-xrf2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2edcfdc5120 [] []}} ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-xrf2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.677 [INFO][3874] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-xrf2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.730 [INFO][3884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" HandleID="k8s-pod-network.77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.748 [INFO][3884] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" HandleID="k8s-pod-network.77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002655d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-f-fd30d73867", "pod":"calico-apiserver-7b466f6854-xrf2v", "timestamp":"2025-01-17 12:19:20.730282138 +0000 UTC"}, Hostname:"ci-4081.3.0-f-fd30d73867", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.748 [INFO][3884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.748 [INFO][3884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.748 [INFO][3884] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-fd30d73867'
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.753 [INFO][3884] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.764 [INFO][3884] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.774 [INFO][3884] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.778 [INFO][3884] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.782 [INFO][3884] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.782 [INFO][3884] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.785 [INFO][3884] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.793 [INFO][3884] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.808 [INFO][3884] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.129/26] block=192.168.52.128/26 handle="k8s-pod-network.77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.808 [INFO][3884] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.129/26] handle="k8s-pod-network.77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.808 [INFO][3884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:20.850318 containerd[1459]: 2025-01-17 12:19:20.808 [INFO][3884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.129/26] IPv6=[] ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" HandleID="k8s-pod-network.77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.852348 containerd[1459]: 2025-01-17 12:19:20.812 [INFO][3874] cni-plugin/k8s.go 386: Populated endpoint ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-xrf2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0", GenerateName:"calico-apiserver-7b466f6854-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e7faaed-af39-479f-9b85-c936c88dbeb7", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b466f6854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"", Pod:"calico-apiserver-7b466f6854-xrf2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2edcfdc5120", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:20.852348 containerd[1459]: 2025-01-17 12:19:20.813 [INFO][3874] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.129/32] ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-xrf2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.852348 containerd[1459]: 2025-01-17 12:19:20.813 [INFO][3874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2edcfdc5120 ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-xrf2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.852348 containerd[1459]: 2025-01-17 12:19:20.818 [INFO][3874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-xrf2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.852348 containerd[1459]: 2025-01-17 12:19:20.819 [INFO][3874] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-xrf2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0", GenerateName:"calico-apiserver-7b466f6854-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e7faaed-af39-479f-9b85-c936c88dbeb7", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b466f6854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc", Pod:"calico-apiserver-7b466f6854-xrf2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2edcfdc5120", MAC:"ca:8d:c3:67:56:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:20.852348 containerd[1459]: 2025-01-17 12:19:20.839 [INFO][3874] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-xrf2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:20.892489 containerd[1459]: time="2025-01-17T12:19:20.892129937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:19:20.892489 containerd[1459]: time="2025-01-17T12:19:20.892297478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:19:20.892489 containerd[1459]: time="2025-01-17T12:19:20.892316695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:20.893101 containerd[1459]: time="2025-01-17T12:19:20.892933343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:20.930430 systemd[1]: Started cri-containerd-77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc.scope - libcontainer container 77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc.
Jan 17 12:19:21.051554 containerd[1459]: time="2025-01-17T12:19:21.051419991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b466f6854-xrf2v,Uid:5e7faaed-af39-479f-9b85-c936c88dbeb7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc\""
Jan 17 12:19:21.064674 containerd[1459]: time="2025-01-17T12:19:21.063603245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 17 12:19:21.273813 containerd[1459]: time="2025-01-17T12:19:21.273707877Z" level=info msg="StopPodSandbox for \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\""
Jan 17 12:19:21.275436 containerd[1459]: time="2025-01-17T12:19:21.275372775Z" level=info msg="StopPodSandbox for \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\""
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.364 [INFO][3977] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.364 [INFO][3977] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" iface="eth0" netns="/var/run/netns/cni-38166b90-78c0-d1ea-141a-cb93de4e8888"
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.364 [INFO][3977] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" iface="eth0" netns="/var/run/netns/cni-38166b90-78c0-d1ea-141a-cb93de4e8888"
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.369 [INFO][3977] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" iface="eth0" netns="/var/run/netns/cni-38166b90-78c0-d1ea-141a-cb93de4e8888"
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.369 [INFO][3977] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.369 [INFO][3977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.433 [INFO][3986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" HandleID="k8s-pod-network.e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.433 [INFO][3986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.433 [INFO][3986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.447 [WARNING][3986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" HandleID="k8s-pod-network.e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.447 [INFO][3986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" HandleID="k8s-pod-network.e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.452 [INFO][3986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:21.464782 containerd[1459]: 2025-01-17 12:19:21.458 [INFO][3977] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:21.464782 containerd[1459]: time="2025-01-17T12:19:21.462671301Z" level=info msg="TearDown network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\" successfully"
Jan 17 12:19:21.464782 containerd[1459]: time="2025-01-17T12:19:21.462709891Z" level=info msg="StopPodSandbox for \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\" returns successfully"
Jan 17 12:19:21.467834 containerd[1459]: time="2025-01-17T12:19:21.466691644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h55hv,Uid:b99954fd-00d0-4234-8172-969ac6f807eb,Namespace:calico-system,Attempt:1,}"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.395 [INFO][3970] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.395 [INFO][3970] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" iface="eth0" netns="/var/run/netns/cni-41cdbdd8-1470-4bcf-1531-018854c248eb"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.396 [INFO][3970] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" iface="eth0" netns="/var/run/netns/cni-41cdbdd8-1470-4bcf-1531-018854c248eb"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.397 [INFO][3970] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" iface="eth0" netns="/var/run/netns/cni-41cdbdd8-1470-4bcf-1531-018854c248eb"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.397 [INFO][3970] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.397 [INFO][3970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.468 [INFO][3990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" HandleID="k8s-pod-network.e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.469 [INFO][3990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.469 [INFO][3990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.481 [WARNING][3990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" HandleID="k8s-pod-network.e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.481 [INFO][3990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" HandleID="k8s-pod-network.e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.485 [INFO][3990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:21.489911 containerd[1459]: 2025-01-17 12:19:21.487 [INFO][3970] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"
Jan 17 12:19:21.493324 containerd[1459]: time="2025-01-17T12:19:21.493258119Z" level=info msg="TearDown network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\" successfully"
Jan 17 12:19:21.493324 containerd[1459]: time="2025-01-17T12:19:21.493315875Z" level=info msg="StopPodSandbox for \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\" returns successfully"
Jan 17 12:19:21.494829 kubelet[2491]: E0117 12:19:21.494170 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:21.502480 containerd[1459]: time="2025-01-17T12:19:21.501173161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kks2v,Uid:540c0bc8-bb65-4107-8514-8f6a7b04b667,Namespace:kube-system,Attempt:1,}"
Jan 17 12:19:21.594623 systemd[1]: run-netns-cni\x2d41cdbdd8\x2d1470\x2d4bcf\x2d1531\x2d018854c248eb.mount: Deactivated successfully.
Jan 17 12:19:21.594800 systemd[1]: run-netns-cni\x2d38166b90\x2d78c0\x2dd1ea\x2d141a\x2dcb93de4e8888.mount: Deactivated successfully.
Jan 17 12:19:21.784196 systemd-networkd[1366]: cali258509708ee: Link UP
Jan 17 12:19:21.789255 systemd-networkd[1366]: cali258509708ee: Gained carrier
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.592 [INFO][4001] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0 csi-node-driver- calico-system b99954fd-00d0-4234-8172-969ac6f807eb 865 0 2025-01-17 12:18:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-f-fd30d73867 csi-node-driver-h55hv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali258509708ee [] []}} ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Namespace="calico-system" Pod="csi-node-driver-h55hv" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.593 [INFO][4001] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Namespace="calico-system" Pod="csi-node-driver-h55hv" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.683 [INFO][4021] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" HandleID="k8s-pod-network.e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.703 [INFO][4021] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" HandleID="k8s-pod-network.e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042c9e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-f-fd30d73867", "pod":"csi-node-driver-h55hv", "timestamp":"2025-01-17 12:19:21.682952605 +0000 UTC"}, Hostname:"ci-4081.3.0-f-fd30d73867", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.703 [INFO][4021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.703 [INFO][4021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.703 [INFO][4021] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-fd30d73867'
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.709 [INFO][4021] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.721 [INFO][4021] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.734 [INFO][4021] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.739 [INFO][4021] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.746 [INFO][4021] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.746 [INFO][4021] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.750 [INFO][4021] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.758 [INFO][4021] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.772 [INFO][4021] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.130/26] block=192.168.52.128/26 handle="k8s-pod-network.e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.772 [INFO][4021] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.130/26] handle="k8s-pod-network.e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.772 [INFO][4021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:21.812467 containerd[1459]: 2025-01-17 12:19:21.772 [INFO][4021] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.130/26] IPv6=[] ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" HandleID="k8s-pod-network.e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.813276 containerd[1459]: 2025-01-17 12:19:21.777 [INFO][4001] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Namespace="calico-system" Pod="csi-node-driver-h55hv" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b99954fd-00d0-4234-8172-969ac6f807eb", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"", Pod:"csi-node-driver-h55hv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali258509708ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:21.813276 containerd[1459]: 2025-01-17 12:19:21.777 [INFO][4001] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.130/32] ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Namespace="calico-system" Pod="csi-node-driver-h55hv" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.813276 containerd[1459]: 2025-01-17 12:19:21.777 [INFO][4001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali258509708ee ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Namespace="calico-system" Pod="csi-node-driver-h55hv" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.813276 containerd[1459]: 2025-01-17 12:19:21.786 [INFO][4001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Namespace="calico-system" Pod="csi-node-driver-h55hv" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.813276 containerd[1459]: 2025-01-17 12:19:21.787 [INFO][4001] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Namespace="calico-system" Pod="csi-node-driver-h55hv" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b99954fd-00d0-4234-8172-969ac6f807eb", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228", Pod:"csi-node-driver-h55hv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali258509708ee", MAC:"52:e7:d8:75:e3:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:21.813276 containerd[1459]: 2025-01-17 12:19:21.807 [INFO][4001] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228" Namespace="calico-system" Pod="csi-node-driver-h55hv" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:21.875214 containerd[1459]: time="2025-01-17T12:19:21.874856792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:19:21.875214 containerd[1459]: time="2025-01-17T12:19:21.874947725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:19:21.875214 containerd[1459]: time="2025-01-17T12:19:21.874965229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:21.876377 containerd[1459]: time="2025-01-17T12:19:21.876121122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:21.918889 systemd-networkd[1366]: calic939ee61ee3: Link UP
Jan 17 12:19:21.921867 systemd-networkd[1366]: calic939ee61ee3: Gained carrier
Jan 17 12:19:21.951101 kubelet[2491]: I0117 12:19:21.950467 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:19:21.954880 kubelet[2491]: E0117 12:19:21.954385 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:21.953082 systemd[1]: Started cri-containerd-e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228.scope - libcontainer container e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228.
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.644 [INFO][4012] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0 coredns-6f6b679f8f- kube-system 540c0bc8-bb65-4107-8514-8f6a7b04b667 866 0 2025-01-17 12:18:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-f-fd30d73867 coredns-6f6b679f8f-kks2v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic939ee61ee3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-kks2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.644 [INFO][4012] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-kks2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.721 [INFO][4028] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" HandleID="k8s-pod-network.057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.738 [INFO][4028] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" HandleID="k8s-pod-network.057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bdcb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-f-fd30d73867", "pod":"coredns-6f6b679f8f-kks2v", "timestamp":"2025-01-17 12:19:21.721596982 +0000 UTC"}, Hostname:"ci-4081.3.0-f-fd30d73867", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.739 [INFO][4028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.772 [INFO][4028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.773 [INFO][4028] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-fd30d73867'
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.810 [INFO][4028] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.828 [INFO][4028] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.842 [INFO][4028] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.852 [INFO][4028] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.859 [INFO][4028] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.859 [INFO][4028] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.865 [INFO][4028] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.881 [INFO][4028] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.902 [INFO][4028] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.131/26] block=192.168.52.128/26 handle="k8s-pod-network.057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.903 [INFO][4028] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.131/26] handle="k8s-pod-network.057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.903 [INFO][4028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:21.984857 containerd[1459]: 2025-01-17 12:19:21.903 [INFO][4028] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.131/26] IPv6=[] ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" HandleID="k8s-pod-network.057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:21.985505 containerd[1459]: 2025-01-17 12:19:21.911 [INFO][4012] cni-plugin/k8s.go 386: Populated endpoint ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-kks2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"540c0bc8-bb65-4107-8514-8f6a7b04b667", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"", Pod:"coredns-6f6b679f8f-kks2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic939ee61ee3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:21.985505 containerd[1459]: 2025-01-17 12:19:21.912 [INFO][4012] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.131/32] ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-kks2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:21.985505 containerd[1459]: 2025-01-17 12:19:21.912 [INFO][4012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic939ee61ee3 ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-kks2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:21.985505 containerd[1459]: 2025-01-17 12:19:21.924 [INFO][4012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-kks2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:21.985505 containerd[1459]: 2025-01-17 12:19:21.934 [INFO][4012] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-kks2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"540c0bc8-bb65-4107-8514-8f6a7b04b667", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7", Pod:"coredns-6f6b679f8f-kks2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic939ee61ee3", MAC:"f6:5e:bb:1c:a2:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:21.985505 containerd[1459]: 2025-01-17 12:19:21.973 [INFO][4012] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7" Namespace="kube-system" Pod="coredns-6f6b679f8f-kks2v" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:22.035914 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL
Jan 17 12:19:22.082329 containerd[1459]: time="2025-01-17T12:19:22.082269152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h55hv,Uid:b99954fd-00d0-4234-8172-969ac6f807eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228\""
Jan 17 12:19:22.118858 containerd[1459]: time="2025-01-17T12:19:22.118649649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:19:22.118858 containerd[1459]: time="2025-01-17T12:19:22.118728929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:19:22.119583 containerd[1459]: time="2025-01-17T12:19:22.119031565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:22.121252 containerd[1459]: time="2025-01-17T12:19:22.121105808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:22.169134 systemd[1]: Started cri-containerd-057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7.scope - libcontainer container 057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7.
Jan 17 12:19:22.261292 containerd[1459]: time="2025-01-17T12:19:22.261209742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kks2v,Uid:540c0bc8-bb65-4107-8514-8f6a7b04b667,Namespace:kube-system,Attempt:1,} returns sandbox id \"057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7\""
Jan 17 12:19:22.264785 kubelet[2491]: E0117 12:19:22.263873 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:22.268028 containerd[1459]: time="2025-01-17T12:19:22.267958182Z" level=info msg="CreateContainer within sandbox \"057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 12:19:22.275235 containerd[1459]: time="2025-01-17T12:19:22.274927199Z" level=info msg="StopPodSandbox for \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\""
Jan 17 12:19:22.310886 containerd[1459]: time="2025-01-17T12:19:22.309059516Z" level=info msg="CreateContainer within sandbox \"057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aec8ce4b22df69df99d5918539c4782bed72ff1e0a8ca251d186e27da0de1793\""
Jan 17 12:19:22.313085 containerd[1459]: time="2025-01-17T12:19:22.312583832Z" level=info msg="StartContainer for \"aec8ce4b22df69df99d5918539c4782bed72ff1e0a8ca251d186e27da0de1793\""
Jan 17 12:19:22.419344 systemd-networkd[1366]: cali2edcfdc5120: Gained IPv6LL
Jan 17 12:19:22.449902 systemd[1]: Started cri-containerd-aec8ce4b22df69df99d5918539c4782bed72ff1e0a8ca251d186e27da0de1793.scope - libcontainer container aec8ce4b22df69df99d5918539c4782bed72ff1e0a8ca251d186e27da0de1793.
Jan 17 12:19:22.521071 containerd[1459]: time="2025-01-17T12:19:22.519350239Z" level=info msg="StartContainer for \"aec8ce4b22df69df99d5918539c4782bed72ff1e0a8ca251d186e27da0de1793\" returns successfully"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.456 [INFO][4177] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.457 [INFO][4177] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" iface="eth0" netns="/var/run/netns/cni-9b64455f-4d7d-b037-1168-eff3df8b7f42"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.457 [INFO][4177] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" iface="eth0" netns="/var/run/netns/cni-9b64455f-4d7d-b037-1168-eff3df8b7f42"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.457 [INFO][4177] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" iface="eth0" netns="/var/run/netns/cni-9b64455f-4d7d-b037-1168-eff3df8b7f42"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.457 [INFO][4177] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.457 [INFO][4177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.560 [INFO][4211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" HandleID="k8s-pod-network.e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.561 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.561 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.571 [WARNING][4211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" HandleID="k8s-pod-network.e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.572 [INFO][4211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" HandleID="k8s-pod-network.e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.575 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:22.586301 containerd[1459]: 2025-01-17 12:19:22.577 [INFO][4177] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:22.586301 containerd[1459]: time="2025-01-17T12:19:22.585914373Z" level=info msg="TearDown network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\" successfully"
Jan 17 12:19:22.586301 containerd[1459]: time="2025-01-17T12:19:22.585948451Z" level=info msg="StopPodSandbox for \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\" returns successfully"
Jan 17 12:19:22.593076 containerd[1459]: time="2025-01-17T12:19:22.590499275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b466f6854-hrc5h,Uid:d2d2e829-8efa-4f4c-b9c2-2cd87395f520,Namespace:calico-apiserver,Attempt:1,}"
Jan 17 12:19:22.601545 systemd[1]: run-netns-cni\x2d9b64455f\x2d4d7d\x2db037\x2d1168\x2deff3df8b7f42.mount: Deactivated successfully.
Jan 17 12:19:22.779003 kubelet[2491]: E0117 12:19:22.778543 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:22.785038 kubelet[2491]: E0117 12:19:22.784805 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:22.931584 systemd-networkd[1366]: calia2ce85091b0: Link UP
Jan 17 12:19:22.933429 systemd-networkd[1366]: calia2ce85091b0: Gained carrier
Jan 17 12:19:22.956375 kubelet[2491]: I0117 12:19:22.955116 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kks2v" podStartSLOduration=38.955086666 podStartE2EDuration="38.955086666s" podCreationTimestamp="2025-01-17 12:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:22.815605337 +0000 UTC m=+44.726107930" watchObservedRunningTime="2025-01-17 12:19:22.955086666 +0000 UTC m=+44.865589263"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.737 [INFO][4247] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0 calico-apiserver-7b466f6854- calico-apiserver d2d2e829-8efa-4f4c-b9c2-2cd87395f520 883 0 2025-01-17 12:18:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b466f6854 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-f-fd30d73867 calico-apiserver-7b466f6854-hrc5h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia2ce85091b0 [] []}} ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-hrc5h" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.737 [INFO][4247] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-hrc5h" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.839 [INFO][4260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" HandleID="k8s-pod-network.6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.858 [INFO][4260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" HandleID="k8s-pod-network.6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003198d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-f-fd30d73867", "pod":"calico-apiserver-7b466f6854-hrc5h", "timestamp":"2025-01-17 12:19:22.839326263 +0000 UTC"}, Hostname:"ci-4081.3.0-f-fd30d73867", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.858 [INFO][4260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.858 [INFO][4260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.858 [INFO][4260] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-fd30d73867'
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.865 [INFO][4260] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.876 [INFO][4260] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.889 [INFO][4260] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.893 [INFO][4260] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.898 [INFO][4260] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.898 [INFO][4260] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.901 [INFO][4260] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.908 [INFO][4260] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.919 [INFO][4260] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.132/26] block=192.168.52.128/26 handle="k8s-pod-network.6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.919 [INFO][4260] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.132/26] handle="k8s-pod-network.6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.919 [INFO][4260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:22.966571 containerd[1459]: 2025-01-17 12:19:22.919 [INFO][4260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.132/26] IPv6=[] ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" HandleID="k8s-pod-network.6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:22.968299 containerd[1459]: 2025-01-17 12:19:22.922 [INFO][4247] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-hrc5h" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0", GenerateName:"calico-apiserver-7b466f6854-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2d2e829-8efa-4f4c-b9c2-2cd87395f520", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b466f6854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"", Pod:"calico-apiserver-7b466f6854-hrc5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2ce85091b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:22.968299 containerd[1459]: 2025-01-17 12:19:22.922 [INFO][4247] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.132/32] ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-hrc5h" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:22.968299 containerd[1459]: 2025-01-17 12:19:22.922 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2ce85091b0 ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-hrc5h" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:22.968299 containerd[1459]: 2025-01-17 12:19:22.934 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-hrc5h" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:22.968299 containerd[1459]: 2025-01-17 12:19:22.936 [INFO][4247] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-hrc5h" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0", GenerateName:"calico-apiserver-7b466f6854-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2d2e829-8efa-4f4c-b9c2-2cd87395f520", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b466f6854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6", Pod:"calico-apiserver-7b466f6854-hrc5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2ce85091b0", MAC:"16:22:7c:55:53:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:22.968299 containerd[1459]: 2025-01-17 12:19:22.958 [INFO][4247] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6" Namespace="calico-apiserver" Pod="calico-apiserver-7b466f6854-hrc5h" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:23.016293 containerd[1459]: time="2025-01-17T12:19:23.012938290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:19:23.016293 containerd[1459]: time="2025-01-17T12:19:23.013043998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:19:23.016293 containerd[1459]: time="2025-01-17T12:19:23.013066929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:23.016293 containerd[1459]: time="2025-01-17T12:19:23.013773352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:23.059266 systemd-networkd[1366]: cali258509708ee: Gained IPv6LL
Jan 17 12:19:23.062199 systemd[1]: Started cri-containerd-6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6.scope - libcontainer container 6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6.
Jan 17 12:19:23.062887 systemd-networkd[1366]: calic939ee61ee3: Gained IPv6LL
Jan 17 12:19:23.155613 containerd[1459]: time="2025-01-17T12:19:23.155569901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b466f6854-hrc5h,Uid:d2d2e829-8efa-4f4c-b9c2-2cd87395f520,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6\""
Jan 17 12:19:23.793655 kubelet[2491]: E0117 12:19:23.792520 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:24.021049 systemd-networkd[1366]: calia2ce85091b0: Gained IPv6LL
Jan 17 12:19:24.278400 containerd[1459]: time="2025-01-17T12:19:24.278136764Z" level=info msg="StopPodSandbox for \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\""
Jan 17 12:19:24.283111 containerd[1459]: time="2025-01-17T12:19:24.279488972Z" level=info msg="StopPodSandbox for \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\""
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.452 [INFO][4353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.453 [INFO][4353] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" iface="eth0" netns="/var/run/netns/cni-63259886-a2a6-32ac-b1f0-b0bf9e52b134"
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.454 [INFO][4353] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" iface="eth0" netns="/var/run/netns/cni-63259886-a2a6-32ac-b1f0-b0bf9e52b134"
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.454 [INFO][4353] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" iface="eth0" netns="/var/run/netns/cni-63259886-a2a6-32ac-b1f0-b0bf9e52b134"
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.454 [INFO][4353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.455 [INFO][4353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.621 [INFO][4369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.622 [INFO][4369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.622 [INFO][4369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.640 [WARNING][4369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.642 [INFO][4369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.646 [INFO][4369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:24.665890 containerd[1459]: 2025-01-17 12:19:24.657 [INFO][4353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:24.679196 containerd[1459]: time="2025-01-17T12:19:24.676257496Z" level=info msg="TearDown network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\" successfully"
Jan 17 12:19:24.679196 containerd[1459]: time="2025-01-17T12:19:24.676325197Z" level=info msg="StopPodSandbox for \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\" returns successfully"
Jan 17 12:19:24.678201 systemd[1]: run-netns-cni\x2d63259886\x2da2a6\x2d32ac\x2db1f0\x2db0bf9e52b134.mount: Deactivated successfully.
Jan 17 12:19:24.684193 containerd[1459]: time="2025-01-17T12:19:24.682701265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f85c7775-l4kfg,Uid:82477d9d-231e-4438-b265-cae0af210b64,Namespace:calico-system,Attempt:1,}"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.510 [INFO][4357] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.513 [INFO][4357] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" iface="eth0" netns="/var/run/netns/cni-41e3c4f3-e37c-0985-fd5d-a7a3130b0c73"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.516 [INFO][4357] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" iface="eth0" netns="/var/run/netns/cni-41e3c4f3-e37c-0985-fd5d-a7a3130b0c73"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.520 [INFO][4357] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" iface="eth0" netns="/var/run/netns/cni-41e3c4f3-e37c-0985-fd5d-a7a3130b0c73"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.520 [INFO][4357] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.520 [INFO][4357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.654 [INFO][4374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" HandleID="k8s-pod-network.dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.661 [INFO][4374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.661 [INFO][4374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.689 [WARNING][4374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" HandleID="k8s-pod-network.dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.689 [INFO][4374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" HandleID="k8s-pod-network.dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.695 [INFO][4374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:24.717182 containerd[1459]: 2025-01-17 12:19:24.705 [INFO][4357] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:24.717182 containerd[1459]: time="2025-01-17T12:19:24.717096467Z" level=info msg="TearDown network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\" successfully"
Jan 17 12:19:24.717182 containerd[1459]: time="2025-01-17T12:19:24.717124992Z" level=info msg="StopPodSandbox for \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\" returns successfully"
Jan 17 12:19:24.726118 containerd[1459]: time="2025-01-17T12:19:24.721088344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zgmwb,Uid:6f470594-2379-4193-8b55-bd3e6a5996c1,Namespace:kube-system,Attempt:1,}"
Jan 17 12:19:24.726173 kubelet[2491]: E0117 12:19:24.717556 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:24.741365 systemd[1]: run-netns-cni\x2d41e3c4f3\x2de37c\x2d0985\x2dfd5d\x2da7a3130b0c73.mount: Deactivated successfully.
Jan 17 12:19:24.820635 kubelet[2491]: E0117 12:19:24.820535 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:25.355855 systemd-networkd[1366]: cali0367d6b3cb8: Link UP
Jan 17 12:19:25.358068 systemd-networkd[1366]: cali0367d6b3cb8: Gained carrier
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:24.958 [INFO][4382] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0 calico-kube-controllers-75f85c7775- calico-system 82477d9d-231e-4438-b265-cae0af210b64 906 0 2025-01-17 12:18:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75f85c7775 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-f-fd30d73867 calico-kube-controllers-75f85c7775-l4kfg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0367d6b3cb8 [] []}} ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Namespace="calico-system" Pod="calico-kube-controllers-75f85c7775-l4kfg" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:24.962 [INFO][4382] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Namespace="calico-system" Pod="calico-kube-controllers-75f85c7775-l4kfg" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.168 [INFO][4406] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.195 [INFO][4406] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049d2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-f-fd30d73867", "pod":"calico-kube-controllers-75f85c7775-l4kfg", "timestamp":"2025-01-17 12:19:25.168893552 +0000 UTC"}, Hostname:"ci-4081.3.0-f-fd30d73867", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.195 [INFO][4406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.198 [INFO][4406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.199 [INFO][4406] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-fd30d73867'
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.209 [INFO][4406] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.221 [INFO][4406] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.252 [INFO][4406] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.266 [INFO][4406] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.273 [INFO][4406] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.273 [INFO][4406] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.280 [INFO][4406] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.297 [INFO][4406] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.325 [INFO][4406] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.133/26] block=192.168.52.128/26 handle="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.325 [INFO][4406] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.133/26] handle="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.325 [INFO][4406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:25.416134 containerd[1459]: 2025-01-17 12:19:25.325 [INFO][4406] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.133/26] IPv6=[] ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:25.422249 containerd[1459]: 2025-01-17 12:19:25.334 [INFO][4382] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Namespace="calico-system" Pod="calico-kube-controllers-75f85c7775-l4kfg" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0", GenerateName:"calico-kube-controllers-75f85c7775-", Namespace:"calico-system", SelfLink:"", UID:"82477d9d-231e-4438-b265-cae0af210b64", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f85c7775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"", Pod:"calico-kube-controllers-75f85c7775-l4kfg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0367d6b3cb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:25.422249 containerd[1459]: 2025-01-17 12:19:25.334 [INFO][4382] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.133/32] ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Namespace="calico-system" Pod="calico-kube-controllers-75f85c7775-l4kfg" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:25.422249 containerd[1459]: 2025-01-17 12:19:25.334 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0367d6b3cb8 ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Namespace="calico-system" Pod="calico-kube-controllers-75f85c7775-l4kfg" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:25.422249 containerd[1459]: 2025-01-17 12:19:25.354 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Namespace="calico-system" Pod="calico-kube-controllers-75f85c7775-l4kfg" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:25.422249 containerd[1459]: 2025-01-17 12:19:25.359 [INFO][4382] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Namespace="calico-system" Pod="calico-kube-controllers-75f85c7775-l4kfg" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0", GenerateName:"calico-kube-controllers-75f85c7775-", Namespace:"calico-system", SelfLink:"", UID:"82477d9d-231e-4438-b265-cae0af210b64", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f85c7775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a", Pod:"calico-kube-controllers-75f85c7775-l4kfg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0367d6b3cb8", MAC:"52:ab:49:68:c5:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:25.422249 containerd[1459]: 2025-01-17 12:19:25.401 [INFO][4382] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Namespace="calico-system" Pod="calico-kube-controllers-75f85c7775-l4kfg" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:25.485818 containerd[1459]: time="2025-01-17T12:19:25.485506327Z" level=info msg="StopContainer for \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\" with timeout 300 (s)"
Jan 17 12:19:25.490420 containerd[1459]: time="2025-01-17T12:19:25.490355752Z" level=info msg="Stop container \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\" with signal terminated"
Jan 17 12:19:25.642643 containerd[1459]: time="2025-01-17T12:19:25.639017466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:19:25.642643 containerd[1459]: time="2025-01-17T12:19:25.639100436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:19:25.642643 containerd[1459]: time="2025-01-17T12:19:25.639112384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:25.642643 containerd[1459]: time="2025-01-17T12:19:25.639217428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:25.708417 systemd-networkd[1366]: calif7121a239a4: Link UP
Jan 17 12:19:25.710341 systemd-networkd[1366]: calif7121a239a4: Gained carrier
Jan 17 12:19:25.750059 systemd[1]: Started cri-containerd-e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a.scope - libcontainer container e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a.
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.054 [INFO][4393] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0 coredns-6f6b679f8f- kube-system 6f470594-2379-4193-8b55-bd3e6a5996c1 907 0 2025-01-17 12:18:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-f-fd30d73867 coredns-6f6b679f8f-zgmwb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7121a239a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Namespace="kube-system" Pod="coredns-6f6b679f8f-zgmwb" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.054 [INFO][4393] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Namespace="kube-system" Pod="coredns-6f6b679f8f-zgmwb" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.181 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" HandleID="k8s-pod-network.8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.202 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" HandleID="k8s-pod-network.8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-f-fd30d73867", "pod":"coredns-6f6b679f8f-zgmwb", "timestamp":"2025-01-17 12:19:25.181004262 +0000 UTC"}, Hostname:"ci-4081.3.0-f-fd30d73867", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.202 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.330 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.330 [INFO][4411] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-fd30d73867'
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.352 [INFO][4411] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.448 [INFO][4411] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.541 [INFO][4411] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.562 [INFO][4411] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.587 [INFO][4411] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.588 [INFO][4411] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.598 [INFO][4411] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.626 [INFO][4411] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.664 [INFO][4411] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.134/26] block=192.168.52.128/26 handle="k8s-pod-network.8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.664 [INFO][4411] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.134/26] handle="k8s-pod-network.8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" host="ci-4081.3.0-f-fd30d73867"
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.667 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:25.783696 containerd[1459]: 2025-01-17 12:19:25.667 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.134/26] IPv6=[] ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" HandleID="k8s-pod-network.8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:25.784653 containerd[1459]: 2025-01-17 12:19:25.685 [INFO][4393] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Namespace="kube-system" Pod="coredns-6f6b679f8f-zgmwb" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6f470594-2379-4193-8b55-bd3e6a5996c1", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"", Pod:"coredns-6f6b679f8f-zgmwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7121a239a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:25.784653 containerd[1459]: 2025-01-17 12:19:25.685 [INFO][4393] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.134/32] ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Namespace="kube-system" Pod="coredns-6f6b679f8f-zgmwb" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:25.784653 containerd[1459]: 2025-01-17 12:19:25.685 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7121a239a4 ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Namespace="kube-system" Pod="coredns-6f6b679f8f-zgmwb" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:25.784653 containerd[1459]: 2025-01-17 12:19:25.708 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Namespace="kube-system" Pod="coredns-6f6b679f8f-zgmwb" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:25.784653 containerd[1459]: 2025-01-17 12:19:25.715 [INFO][4393] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Namespace="kube-system" Pod="coredns-6f6b679f8f-zgmwb" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6f470594-2379-4193-8b55-bd3e6a5996c1", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a", Pod:"coredns-6f6b679f8f-zgmwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7121a239a4", MAC:"72:d4:97:55:a9:16", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:25.784653 containerd[1459]: 2025-01-17 12:19:25.774 [INFO][4393] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a" Namespace="kube-system" Pod="coredns-6f6b679f8f-zgmwb" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:25.824927 kubelet[2491]: E0117 12:19:25.823880 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:25.876533 containerd[1459]: time="2025-01-17T12:19:25.875983381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:19:25.876533 containerd[1459]: time="2025-01-17T12:19:25.876164714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:19:25.876533 containerd[1459]: time="2025-01-17T12:19:25.876193446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:25.876533 containerd[1459]: time="2025-01-17T12:19:25.876317837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:19:25.991174 systemd[1]: Started cri-containerd-8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a.scope - libcontainer container 8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a.
Jan 17 12:19:26.290524 containerd[1459]: time="2025-01-17T12:19:26.287793609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zgmwb,Uid:6f470594-2379-4193-8b55-bd3e6a5996c1,Namespace:kube-system,Attempt:1,} returns sandbox id \"8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a\""
Jan 17 12:19:26.291127 containerd[1459]: time="2025-01-17T12:19:26.288523552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f85c7775-l4kfg,Uid:82477d9d-231e-4438-b265-cae0af210b64,Namespace:calico-system,Attempt:1,} returns sandbox id \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\""
Jan 17 12:19:26.295012 kubelet[2491]: E0117 12:19:26.294877 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:26.307880 containerd[1459]: time="2025-01-17T12:19:26.307112152Z" level=info msg="CreateContainer within sandbox \"8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 12:19:26.335418 containerd[1459]: time="2025-01-17T12:19:26.334131700Z" level=info msg="CreateContainer within sandbox \"8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a75ef72e9dbe028589749e13f3f631f59734e19cd993a3ad463af218e85be078\""
Jan 17 12:19:26.336137 containerd[1459]: time="2025-01-17T12:19:26.336069661Z" level=info msg="StartContainer for \"a75ef72e9dbe028589749e13f3f631f59734e19cd993a3ad463af218e85be078\""
Jan 17 12:19:26.493100 systemd[1]: Started cri-containerd-a75ef72e9dbe028589749e13f3f631f59734e19cd993a3ad463af218e85be078.scope - libcontainer container a75ef72e9dbe028589749e13f3f631f59734e19cd993a3ad463af218e85be078.
Jan 17 12:19:26.595874 containerd[1459]: time="2025-01-17T12:19:26.593173508Z" level=info msg="StartContainer for \"a75ef72e9dbe028589749e13f3f631f59734e19cd993a3ad463af218e85be078\" returns successfully"
Jan 17 12:19:26.742168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616860169.mount: Deactivated successfully.
Jan 17 12:19:26.828282 kubelet[2491]: E0117 12:19:26.827642 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:26.864864 kubelet[2491]: I0117 12:19:26.863994 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zgmwb" podStartSLOduration=42.863964852 podStartE2EDuration="42.863964852s" podCreationTimestamp="2025-01-17 12:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:26.859474844 +0000 UTC m=+48.769977438" watchObservedRunningTime="2025-01-17 12:19:26.863964852 +0000 UTC m=+48.774467446"
Jan 17 12:19:26.878488 containerd[1459]: time="2025-01-17T12:19:26.876288900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:26.879839 containerd[1459]: time="2025-01-17T12:19:26.879765792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Jan 17 12:19:26.881673 containerd[1459]: time="2025-01-17T12:19:26.881602033Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:26.887662 containerd[1459]: time="2025-01-17T12:19:26.887600461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:26.892263 containerd[1459]: time="2025-01-17T12:19:26.892197536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.827503847s"
Jan 17 12:19:26.892578 containerd[1459]: time="2025-01-17T12:19:26.892442468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 17 12:19:26.898966 systemd-networkd[1366]: cali0367d6b3cb8: Gained IPv6LL
Jan 17 12:19:26.901541 containerd[1459]: time="2025-01-17T12:19:26.901478174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 17 12:19:26.916718 containerd[1459]: time="2025-01-17T12:19:26.916639381Z" level=info msg="CreateContainer within sandbox \"77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 17 12:19:27.005620 containerd[1459]: time="2025-01-17T12:19:27.005370101Z" level=info msg="StopContainer for \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\" with timeout 5 (s)"
Jan 17 12:19:27.007335 containerd[1459]: time="2025-01-17T12:19:27.007292517Z" level=info msg="Stop container \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\" with signal terminated"
Jan 17 12:19:27.010472 containerd[1459]: time="2025-01-17T12:19:27.010379985Z" level=info msg="CreateContainer within sandbox \"77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"61a4c947b1518cd5faf7e9cbdbae4d6bee824372c3b73951b29dc217bbd4c6f6\""
Jan 17 12:19:27.014437 containerd[1459]: time="2025-01-17T12:19:27.012012832Z" level=info msg="StartContainer for \"61a4c947b1518cd5faf7e9cbdbae4d6bee824372c3b73951b29dc217bbd4c6f6\""
Jan 17 12:19:27.027852 systemd-networkd[1366]: calif7121a239a4: Gained IPv6LL
Jan 17 12:19:27.106607 systemd[1]: Started cri-containerd-61a4c947b1518cd5faf7e9cbdbae4d6bee824372c3b73951b29dc217bbd4c6f6.scope - libcontainer container 61a4c947b1518cd5faf7e9cbdbae4d6bee824372c3b73951b29dc217bbd4c6f6.
Jan 17 12:19:27.125693 systemd[1]: cri-containerd-c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9.scope: Deactivated successfully.
Jan 17 12:19:27.127230 systemd[1]: cri-containerd-c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9.scope: Consumed 2.494s CPU time.
Jan 17 12:19:27.205001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9-rootfs.mount: Deactivated successfully.
Jan 17 12:19:27.300659 containerd[1459]: time="2025-01-17T12:19:27.300584762Z" level=info msg="StartContainer for \"61a4c947b1518cd5faf7e9cbdbae4d6bee824372c3b73951b29dc217bbd4c6f6\" returns successfully"
Jan 17 12:19:27.327672 containerd[1459]: time="2025-01-17T12:19:27.305239873Z" level=info msg="shim disconnected" id=c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9 namespace=k8s.io
Jan 17 12:19:27.327672 containerd[1459]: time="2025-01-17T12:19:27.327675626Z" level=warning msg="cleaning up after shim disconnected" id=c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9 namespace=k8s.io
Jan 17 12:19:27.328703 containerd[1459]: time="2025-01-17T12:19:27.327693793Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:19:27.363841 containerd[1459]: time="2025-01-17T12:19:27.363608438Z" level=info msg="StopContainer for \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\" returns successfully"
Jan 17 12:19:27.368248 containerd[1459]: time="2025-01-17T12:19:27.366847986Z" level=info msg="StopPodSandbox for \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\""
Jan 17 12:19:27.381165 containerd[1459]: time="2025-01-17T12:19:27.380960588Z" level=info msg="Container to stop \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:19:27.381165 containerd[1459]: time="2025-01-17T12:19:27.381035291Z" level=info msg="Container to stop \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:19:27.381165 containerd[1459]: time="2025-01-17T12:19:27.381054617Z" level=info msg="Container to stop \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:19:27.408070 systemd[1]: cri-containerd-745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc.scope: Deactivated successfully.
Jan 17 12:19:27.464638 containerd[1459]: time="2025-01-17T12:19:27.464252313Z" level=info msg="shim disconnected" id=745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc namespace=k8s.io
Jan 17 12:19:27.464638 containerd[1459]: time="2025-01-17T12:19:27.464528020Z" level=warning msg="cleaning up after shim disconnected" id=745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc namespace=k8s.io
Jan 17 12:19:27.464638 containerd[1459]: time="2025-01-17T12:19:27.464543625Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:19:27.492909 containerd[1459]: time="2025-01-17T12:19:27.492846239Z" level=info msg="TearDown network for sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" successfully"
Jan 17 12:19:27.492909 containerd[1459]: time="2025-01-17T12:19:27.492887378Z" level=info msg="StopPodSandbox for \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" returns successfully"
Jan 17 12:19:27.538827 kubelet[2491]: I0117 12:19:27.535947 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-lib-modules\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.538827 kubelet[2491]: I0117 12:19:27.536052 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-var-run-calico\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.538827 kubelet[2491]: I0117 12:19:27.536364 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8tg4\" (UniqueName: \"kubernetes.io/projected/77bacb2f-b10c-4b7c-824b-6ba816dc5586-kube-api-access-h8tg4\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.538827 kubelet[2491]: I0117 12:19:27.536406 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-xtables-lock\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.538827 kubelet[2491]: I0117 12:19:27.536479 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-var-lib-calico\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.538827 kubelet[2491]: I0117 12:19:27.536563 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-net-dir\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.539199 kubelet[2491]: I0117 12:19:27.536591 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-policysync\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.539199 kubelet[2491]: I0117 12:19:27.536613 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77bacb2f-b10c-4b7c-824b-6ba816dc5586-tigera-ca-bundle\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.539199 kubelet[2491]: I0117 12:19:27.536634 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-flexvol-driver-host\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.539199 kubelet[2491]: I0117 12:19:27.536649 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-log-dir\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.539199 kubelet[2491]: I0117 12:19:27.536668 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/77bacb2f-b10c-4b7c-824b-6ba816dc5586-node-certs\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.539199 kubelet[2491]: I0117 12:19:27.536690 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-bin-dir\") pod \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\" (UID: \"77bacb2f-b10c-4b7c-824b-6ba816dc5586\") "
Jan 17 12:19:27.553332 kubelet[2491]: I0117 12:19:27.551997 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:19:27.554174 kubelet[2491]: I0117 12:19:27.551160 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-policysync" (OuterVolumeSpecName: "policysync") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:19:27.554574 kubelet[2491]: I0117 12:19:27.554426 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:19:27.554574 kubelet[2491]: I0117 12:19:27.554479 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:19:27.574780 kubelet[2491]: I0117 12:19:27.574706 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:19:27.574780 kubelet[2491]: I0117 12:19:27.574795 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:19:27.574974 kubelet[2491]: I0117 12:19:27.574878 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:19:27.578642 kubelet[2491]: I0117 12:19:27.578561 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:19:27.578882 kubelet[2491]: I0117 12:19:27.578686 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:19:27.586946 kubelet[2491]: I0117 12:19:27.586554 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77bacb2f-b10c-4b7c-824b-6ba816dc5586-kube-api-access-h8tg4" (OuterVolumeSpecName: "kube-api-access-h8tg4") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "kube-api-access-h8tg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:19:27.597183 kubelet[2491]: I0117 12:19:27.595266 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77bacb2f-b10c-4b7c-824b-6ba816dc5586-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:19:27.601122 kubelet[2491]: I0117 12:19:27.601068 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77bacb2f-b10c-4b7c-824b-6ba816dc5586-node-certs" (OuterVolumeSpecName: "node-certs") pod "77bacb2f-b10c-4b7c-824b-6ba816dc5586" (UID: "77bacb2f-b10c-4b7c-824b-6ba816dc5586"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 17 12:19:27.613775 kubelet[2491]: E0117 12:19:27.612025 2491 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77bacb2f-b10c-4b7c-824b-6ba816dc5586" containerName="flexvol-driver"
Jan 17 12:19:27.613775 kubelet[2491]: E0117 12:19:27.612268 2491 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77bacb2f-b10c-4b7c-824b-6ba816dc5586" containerName="install-cni"
Jan 17 12:19:27.613775 kubelet[2491]: E0117 12:19:27.612305 2491 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77bacb2f-b10c-4b7c-824b-6ba816dc5586" containerName="calico-node"
Jan 17 12:19:27.613775 kubelet[2491]: I0117 12:19:27.612410 2491 memory_manager.go:354] "RemoveStaleState removing state" podUID="77bacb2f-b10c-4b7c-824b-6ba816dc5586" containerName="calico-node"
Jan 17 12:19:27.627043 systemd[1]: Created slice kubepods-besteffort-pod15b1c26c_8da4_4550_bbb1_62a446008bb1.slice - libcontainer container kubepods-besteffort-pod15b1c26c_8da4_4550_bbb1_62a446008bb1.slice.
Jan 17 12:19:27.637387 kubelet[2491]: I0117 12:19:27.637240 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/15b1c26c-8da4-4550-bbb1-62a446008bb1-cni-net-dir\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.637387 kubelet[2491]: I0117 12:19:27.637288 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/15b1c26c-8da4-4550-bbb1-62a446008bb1-var-run-calico\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.637387 kubelet[2491]: I0117 12:19:27.637310 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/15b1c26c-8da4-4550-bbb1-62a446008bb1-cni-log-dir\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.637387 kubelet[2491]: I0117 12:19:27.637334 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/15b1c26c-8da4-4550-bbb1-62a446008bb1-policysync\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.637387 kubelet[2491]: I0117 12:19:27.637357 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/15b1c26c-8da4-4550-bbb1-62a446008bb1-flexvol-driver-host\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.637691 kubelet[2491]: I0117 12:19:27.637380 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15b1c26c-8da4-4550-bbb1-62a446008bb1-lib-modules\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.637691 kubelet[2491]: I0117 12:19:27.637406 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b1c26c-8da4-4550-bbb1-62a446008bb1-tigera-ca-bundle\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.637691 kubelet[2491]: I0117 12:19:27.637433 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/15b1c26c-8da4-4550-bbb1-62a446008bb1-node-certs\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.637691 kubelet[2491]: I0117 12:19:27.637458 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15b1c26c-8da4-4550-bbb1-62a446008bb1-xtables-lock\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.637691 kubelet[2491]: I0117 12:19:27.637479 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15b1c26c-8da4-4550-bbb1-62a446008bb1-var-lib-calico\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.641102 kubelet[2491]: I0117 12:19:27.637498 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxpv\" (UniqueName: \"kubernetes.io/projected/15b1c26c-8da4-4550-bbb1-62a446008bb1-kube-api-access-2dxpv\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.641102 kubelet[2491]: I0117 12:19:27.637522 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/15b1c26c-8da4-4550-bbb1-62a446008bb1-cni-bin-dir\") pod \"calico-node-vmbnf\" (UID: \"15b1c26c-8da4-4550-bbb1-62a446008bb1\") " pod="calico-system/calico-node-vmbnf"
Jan 17 12:19:27.641102 kubelet[2491]: I0117 12:19:27.637548 2491 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-var-run-calico\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\""
Jan 17 12:19:27.641102 kubelet[2491]: I0117 12:19:27.637560 2491 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h8tg4\" (UniqueName: \"kubernetes.io/projected/77bacb2f-b10c-4b7c-824b-6ba816dc5586-kube-api-access-h8tg4\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\""
Jan 17 12:19:27.641102 kubelet[2491]: I0117 12:19:27.637569 2491 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-xtables-lock\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\""
Jan 17 12:19:27.641102 kubelet[2491]: I0117 12:19:27.637579 2491 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-var-lib-calico\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\""
Jan 17 12:19:27.641102 kubelet[2491]: I0117 12:19:27.637588 2491 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-net-dir\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath
\"\"" Jan 17 12:19:27.641276 kubelet[2491]: I0117 12:19:27.637595 2491 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-policysync\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:27.641276 kubelet[2491]: I0117 12:19:27.637604 2491 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77bacb2f-b10c-4b7c-824b-6ba816dc5586-tigera-ca-bundle\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:27.641276 kubelet[2491]: I0117 12:19:27.637612 2491 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-log-dir\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:27.641276 kubelet[2491]: I0117 12:19:27.637620 2491 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/77bacb2f-b10c-4b7c-824b-6ba816dc5586-node-certs\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:27.641276 kubelet[2491]: I0117 12:19:27.637627 2491 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-cni-bin-dir\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:27.641276 kubelet[2491]: I0117 12:19:27.637636 2491 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-flexvol-driver-host\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:27.641276 kubelet[2491]: I0117 12:19:27.637645 2491 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77bacb2f-b10c-4b7c-824b-6ba816dc5586-lib-modules\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:27.738258 systemd[1]: var-lib-kubelet-pods-77bacb2f\x2db10c\x2d4b7c\x2d824b\x2d6ba816dc5586-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 17 12:19:27.741030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc-rootfs.mount: Deactivated successfully. Jan 17 12:19:27.741103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc-shm.mount: Deactivated successfully. Jan 17 12:19:27.741165 systemd[1]: var-lib-kubelet-pods-77bacb2f\x2db10c\x2d4b7c\x2d824b\x2d6ba816dc5586-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh8tg4.mount: Deactivated successfully. Jan 17 12:19:27.741228 systemd[1]: var-lib-kubelet-pods-77bacb2f\x2db10c\x2d4b7c\x2d824b\x2d6ba816dc5586-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jan 17 12:19:27.873515 kubelet[2491]: E0117 12:19:27.873476 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:27.880334 kubelet[2491]: I0117 12:19:27.880280 2491 scope.go:117] "RemoveContainer" containerID="c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9" Jan 17 12:19:27.886673 containerd[1459]: time="2025-01-17T12:19:27.886607831Z" level=info msg="RemoveContainer for \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\"" Jan 17 12:19:27.922820 containerd[1459]: time="2025-01-17T12:19:27.922381074Z" level=info msg="RemoveContainer for \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\" returns successfully" Jan 17 12:19:27.923433 kubelet[2491]: I0117 12:19:27.923407 2491 scope.go:117] "RemoveContainer" containerID="ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798" Jan 17 12:19:27.932827 systemd[1]: Removed slice kubepods-besteffort-pod77bacb2f_b10c_4b7c_824b_6ba816dc5586.slice - libcontainer container kubepods-besteffort-pod77bacb2f_b10c_4b7c_824b_6ba816dc5586.slice. Jan 17 12:19:27.933154 systemd[1]: kubepods-besteffort-pod77bacb2f_b10c_4b7c_824b_6ba816dc5586.slice: Consumed 3.330s CPU time. Jan 17 12:19:27.936843 containerd[1459]: time="2025-01-17T12:19:27.933658724Z" level=info msg="RemoveContainer for \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\"" Jan 17 12:19:27.937485 kubelet[2491]: E0117 12:19:27.937421 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:27.966895 containerd[1459]: time="2025-01-17T12:19:27.965653072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vmbnf,Uid:15b1c26c-8da4-4550-bbb1-62a446008bb1,Namespace:calico-system,Attempt:0,}" Jan 17 12:19:27.981172 kubelet[2491]: I0117 12:19:27.980986 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b466f6854-xrf2v" podStartSLOduration=28.141165388 podStartE2EDuration="33.980944562s" podCreationTimestamp="2025-01-17 12:18:54 +0000 UTC" firstStartedPulling="2025-01-17 12:19:21.054119042 +0000 UTC m=+42.964621629" lastFinishedPulling="2025-01-17 12:19:26.893898209 +0000 UTC m=+48.804400803" observedRunningTime="2025-01-17 12:19:27.926077265 +0000 UTC m=+49.836579882" watchObservedRunningTime="2025-01-17 12:19:27.980944562 +0000 UTC m=+49.891447161" Jan 17 12:19:28.029093 containerd[1459]: time="2025-01-17T12:19:28.027545789Z" level=info msg="RemoveContainer for \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\" returns successfully" Jan 17 12:19:28.032065 kubelet[2491]: I0117 12:19:28.031998 2491 scope.go:117] "RemoveContainer" containerID="d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce" Jan 17 12:19:28.059423 containerd[1459]: time="2025-01-17T12:19:28.059350509Z" level=info msg="RemoveContainer for \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\"" Jan 17 12:19:28.080856 containerd[1459]: time="2025-01-17T12:19:28.080428361Z" level=info msg="RemoveContainer for \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\" returns successfully" Jan 17 12:19:28.082361 kubelet[2491]: I0117 12:19:28.082054 2491 scope.go:117] "RemoveContainer" 
containerID="c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9" Jan 17 12:19:28.094512 containerd[1459]: time="2025-01-17T12:19:28.094318014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:28.095623 containerd[1459]: time="2025-01-17T12:19:28.095165603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:28.095623 containerd[1459]: time="2025-01-17T12:19:28.095200231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:28.095623 containerd[1459]: time="2025-01-17T12:19:28.095313608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:28.122192 systemd[1]: cri-containerd-6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f.scope: Deactivated successfully. Jan 17 12:19:28.156153 systemd[1]: Started cri-containerd-98ec9be6173ab977bed69a3ac96968dc30f76a10a5209587db2930e1d265e064.scope - libcontainer container 98ec9be6173ab977bed69a3ac96968dc30f76a10a5209587db2930e1d265e064. Jan 17 12:19:28.180434 containerd[1459]: time="2025-01-17T12:19:28.114980404Z" level=error msg="ContainerStatus for \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\": not found" Jan 17 12:19:28.221839 kubelet[2491]: E0117 12:19:28.221204 2491 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\": not found" containerID="c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9" Jan 17 12:19:28.224837 kubelet[2491]: I0117 12:19:28.223674 2491 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9"} err="failed to get container status \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c14e391e3459eee1b19097ecd76f8e601abb5f089c380022e6c96dcc093705f9\": not found" Jan 17 12:19:28.225130 kubelet[2491]: I0117 12:19:28.225073 2491 scope.go:117] "RemoveContainer" containerID="ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798" Jan 17 12:19:28.226018 containerd[1459]: time="2025-01-17T12:19:28.225935625Z" level=error msg="ContainerStatus for \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\": not found" Jan 17 12:19:28.226980 kubelet[2491]: E0117 12:19:28.226707 2491 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\": not found" containerID="ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798" Jan 17 12:19:28.226980 kubelet[2491]: I0117 12:19:28.226806 2491 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798"} err="failed to get container status \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed6342248a4ebaddffe8ae5be6608f56418168bd1e8d313d690ffdc5b9e8b798\": not found" Jan 17 12:19:28.226980 kubelet[2491]: I0117 12:19:28.226845 2491 scope.go:117] "RemoveContainer" containerID="d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce" Jan 17 12:19:28.227661 containerd[1459]: time="2025-01-17T12:19:28.227484615Z" level=error msg="ContainerStatus for \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\": not found" Jan 17 12:19:28.228805 kubelet[2491]: E0117 12:19:28.228708 2491 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\": not found" containerID="d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce" Jan 17 12:19:28.228805 kubelet[2491]: I0117 12:19:28.228761 2491 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce"} err="failed to get container status \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8901e73224402e20e3012c4525fa7465ea58001b2dd1fd0e55b0c6776a600ce\": not found" Jan 17 12:19:28.298627 kubelet[2491]: I0117 12:19:28.298232 2491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77bacb2f-b10c-4b7c-824b-6ba816dc5586" path="/var/lib/kubelet/pods/77bacb2f-b10c-4b7c-824b-6ba816dc5586/volumes" Jan 17 12:19:28.304435 containerd[1459]: time="2025-01-17T12:19:28.303728824Z" level=info msg="shim disconnected" id=6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f namespace=k8s.io Jan 17 12:19:28.304829 containerd[1459]: time="2025-01-17T12:19:28.304457105Z" level=warning msg="cleaning up after shim disconnected" id=6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f namespace=k8s.io Jan 17 12:19:28.304829 containerd[1459]: time="2025-01-17T12:19:28.304487436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:28.385059 containerd[1459]: time="2025-01-17T12:19:28.384728300Z" level=info msg="StopContainer for \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\" returns successfully" Jan 17 12:19:28.388133 containerd[1459]: time="2025-01-17T12:19:28.386853182Z" level=info msg="StopPodSandbox for \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\"" Jan 17 12:19:28.388133 containerd[1459]: time="2025-01-17T12:19:28.387209680Z" level=info msg="Container to stop \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:19:28.427125 systemd[1]: cri-containerd-1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179.scope: Deactivated successfully. 
Jan 17 12:19:28.563489 containerd[1459]: time="2025-01-17T12:19:28.563117457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vmbnf,Uid:15b1c26c-8da4-4550-bbb1-62a446008bb1,Namespace:calico-system,Attempt:0,} returns sandbox id \"98ec9be6173ab977bed69a3ac96968dc30f76a10a5209587db2930e1d265e064\"" Jan 17 12:19:28.578367 kubelet[2491]: E0117 12:19:28.577403 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:28.595077 containerd[1459]: time="2025-01-17T12:19:28.595019842Z" level=info msg="CreateContainer within sandbox \"98ec9be6173ab977bed69a3ac96968dc30f76a10a5209587db2930e1d265e064\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:19:28.668495 containerd[1459]: time="2025-01-17T12:19:28.668076724Z" level=info msg="shim disconnected" id=1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179 namespace=k8s.io Jan 17 12:19:28.668495 containerd[1459]: time="2025-01-17T12:19:28.668169156Z" level=warning msg="cleaning up after shim disconnected" id=1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179 namespace=k8s.io Jan 17 12:19:28.668495 containerd[1459]: time="2025-01-17T12:19:28.668184031Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:28.700294 containerd[1459]: time="2025-01-17T12:19:28.697898729Z" level=info msg="CreateContainer within sandbox \"98ec9be6173ab977bed69a3ac96968dc30f76a10a5209587db2930e1d265e064\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d7f3c39d9212ce338d5c956400cb6909375f6f27610e5cf053f568f6fa123afe\"" Jan 17 12:19:28.700511 containerd[1459]: time="2025-01-17T12:19:28.700307958Z" level=info msg="StartContainer for \"d7f3c39d9212ce338d5c956400cb6909375f6f27610e5cf053f568f6fa123afe\"" Jan 17 12:19:28.743161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f-rootfs.mount: Deactivated successfully. Jan 17 12:19:28.743352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179-rootfs.mount: Deactivated successfully. Jan 17 12:19:28.743442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179-shm.mount: Deactivated successfully. 
Jan 17 12:19:28.816408 containerd[1459]: time="2025-01-17T12:19:28.815608659Z" level=info msg="TearDown network for sandbox \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\" successfully" Jan 17 12:19:28.816408 containerd[1459]: time="2025-01-17T12:19:28.815664114Z" level=info msg="StopPodSandbox for \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\" returns successfully" Jan 17 12:19:28.933667 kubelet[2491]: I0117 12:19:28.932225 2491 scope.go:117] "RemoveContainer" containerID="6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f" Jan 17 12:19:28.938686 kubelet[2491]: E0117 12:19:28.938127 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:28.951887 kubelet[2491]: I0117 12:19:28.951786 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0c94f622-80de-4abd-b2f4-f05253e01f5a-typha-certs\") pod \"0c94f622-80de-4abd-b2f4-f05253e01f5a\" (UID: \"0c94f622-80de-4abd-b2f4-f05253e01f5a\") " Jan 17 12:19:28.952103 kubelet[2491]: I0117 12:19:28.951930 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp9vp\" (UniqueName: \"kubernetes.io/projected/0c94f622-80de-4abd-b2f4-f05253e01f5a-kube-api-access-tp9vp\") pod \"0c94f622-80de-4abd-b2f4-f05253e01f5a\" (UID: \"0c94f622-80de-4abd-b2f4-f05253e01f5a\") " Jan 17 12:19:28.952103 kubelet[2491]: I0117 12:19:28.951970 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c94f622-80de-4abd-b2f4-f05253e01f5a-tigera-ca-bundle\") pod \"0c94f622-80de-4abd-b2f4-f05253e01f5a\" (UID: \"0c94f622-80de-4abd-b2f4-f05253e01f5a\") " Jan 17 12:19:28.956062 containerd[1459]: time="2025-01-17T12:19:28.954130155Z" level=info msg="RemoveContainer for \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\"" Jan 17 12:19:28.976164 systemd[1]: var-lib-kubelet-pods-0c94f622\x2d80de\x2d4abd\x2db2f4\x2df05253e01f5a-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 17 12:19:29.003218 kubelet[2491]: I0117 12:19:29.002904 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c94f622-80de-4abd-b2f4-f05253e01f5a-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "0c94f622-80de-4abd-b2f4-f05253e01f5a" (UID: "0c94f622-80de-4abd-b2f4-f05253e01f5a"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:19:29.007089 systemd[1]: Started cri-containerd-d7f3c39d9212ce338d5c956400cb6909375f6f27610e5cf053f568f6fa123afe.scope - libcontainer container d7f3c39d9212ce338d5c956400cb6909375f6f27610e5cf053f568f6fa123afe. Jan 17 12:19:29.022057 systemd[1]: var-lib-kubelet-pods-0c94f622\x2d80de\x2d4abd\x2db2f4\x2df05253e01f5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtp9vp.mount: Deactivated successfully. Jan 17 12:19:29.024144 kubelet[2491]: I0117 12:19:29.021800 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c94f622-80de-4abd-b2f4-f05253e01f5a-kube-api-access-tp9vp" (OuterVolumeSpecName: "kube-api-access-tp9vp") pod "0c94f622-80de-4abd-b2f4-f05253e01f5a" (UID: "0c94f622-80de-4abd-b2f4-f05253e01f5a"). InnerVolumeSpecName "kube-api-access-tp9vp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:19:29.041407 containerd[1459]: time="2025-01-17T12:19:29.039068340Z" level=info msg="RemoveContainer for \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\" returns successfully" Jan 17 12:19:29.041407 containerd[1459]: time="2025-01-17T12:19:29.040831174Z" level=error msg="ContainerStatus for \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\": not found" Jan 17 12:19:29.042201 kubelet[2491]: I0117 12:19:29.040292 2491 scope.go:117] "RemoveContainer" containerID="6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f" Jan 17 12:19:29.042201 kubelet[2491]: E0117 12:19:29.041126 2491 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\": not found" containerID="6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f" Jan 17 12:19:29.042201 kubelet[2491]: I0117 12:19:29.041176 2491 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f"} err="failed to get container status \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6731aa64cd6cd84c442d1383ebd9b4f199beba007b2a3a048c808436e7f7325f\": not found" Jan 17 12:19:29.053501 kubelet[2491]: I0117 12:19:29.053362 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c94f622-80de-4abd-b2f4-f05253e01f5a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "0c94f622-80de-4abd-b2f4-f05253e01f5a" (UID: "0c94f622-80de-4abd-b2f4-f05253e01f5a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:19:29.054463 kubelet[2491]: I0117 12:19:29.054099 2491 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tp9vp\" (UniqueName: \"kubernetes.io/projected/0c94f622-80de-4abd-b2f4-f05253e01f5a-kube-api-access-tp9vp\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:29.054849 kubelet[2491]: I0117 12:19:29.054661 2491 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0c94f622-80de-4abd-b2f4-f05253e01f5a-typha-certs\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:29.156372 kubelet[2491]: I0117 12:19:29.155541 2491 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c94f622-80de-4abd-b2f4-f05253e01f5a-tigera-ca-bundle\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\"" Jan 17 12:19:29.262818 systemd[1]: Removed slice kubepods-besteffort-pod0c94f622_80de_4abd_b2f4_f05253e01f5a.slice - libcontainer container kubepods-besteffort-pod0c94f622_80de_4abd_b2f4_f05253e01f5a.slice. 
Jan 17 12:19:29.452047 containerd[1459]: time="2025-01-17T12:19:29.451679833Z" level=info msg="StartContainer for \"d7f3c39d9212ce338d5c956400cb6909375f6f27610e5cf053f568f6fa123afe\" returns successfully" Jan 17 12:19:29.475852 containerd[1459]: time="2025-01-17T12:19:29.474565558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:29.497946 containerd[1459]: time="2025-01-17T12:19:29.497711243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:19:29.521876 containerd[1459]: time="2025-01-17T12:19:29.521219560Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:29.553397 containerd[1459]: time="2025-01-17T12:19:29.553271679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:29.555314 containerd[1459]: time="2025-01-17T12:19:29.555220084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.65361963s" Jan 17 12:19:29.555843 containerd[1459]: time="2025-01-17T12:19:29.555592379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:19:29.560440 containerd[1459]: time="2025-01-17T12:19:29.559825121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:19:29.563451 containerd[1459]: time="2025-01-17T12:19:29.563211156Z" level=info msg="CreateContainer within sandbox \"e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:19:29.696418 containerd[1459]: time="2025-01-17T12:19:29.696347814Z" level=info msg="CreateContainer within sandbox \"e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0015e45ddb7a8af171e32ad89b0dab88df8b2804c348374ef4641e8b0874dab3\"" Jan 17 12:19:29.698172 containerd[1459]: time="2025-01-17T12:19:29.698062795Z" level=info msg="StartContainer for \"0015e45ddb7a8af171e32ad89b0dab88df8b2804c348374ef4641e8b0874dab3\"" Jan 17 12:19:29.724368 systemd[1]: cri-containerd-d7f3c39d9212ce338d5c956400cb6909375f6f27610e5cf053f568f6fa123afe.scope: Deactivated successfully. Jan 17 12:19:29.738355 systemd[1]: var-lib-kubelet-pods-0c94f622\x2d80de\x2d4abd\x2db2f4\x2df05253e01f5a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 17 12:19:29.898130 systemd[1]: Started cri-containerd-0015e45ddb7a8af171e32ad89b0dab88df8b2804c348374ef4641e8b0874dab3.scope - libcontainer container 0015e45ddb7a8af171e32ad89b0dab88df8b2804c348374ef4641e8b0874dab3. Jan 17 12:19:29.913989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7f3c39d9212ce338d5c956400cb6909375f6f27610e5cf053f568f6fa123afe-rootfs.mount: Deactivated successfully. 
Jan 17 12:19:29.958791 containerd[1459]: time="2025-01-17T12:19:29.957614883Z" level=info msg="shim disconnected" id=d7f3c39d9212ce338d5c956400cb6909375f6f27610e5cf053f568f6fa123afe namespace=k8s.io Jan 17 12:19:29.958791 containerd[1459]: time="2025-01-17T12:19:29.957691092Z" level=warning msg="cleaning up after shim disconnected" id=d7f3c39d9212ce338d5c956400cb6909375f6f27610e5cf053f568f6fa123afe namespace=k8s.io Jan 17 12:19:29.958791 containerd[1459]: time="2025-01-17T12:19:29.957704880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:29.963792 kubelet[2491]: E0117 12:19:29.962435 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:29.976346 kubelet[2491]: E0117 12:19:29.975657 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:30.076528 containerd[1459]: time="2025-01-17T12:19:30.076385948Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:30.087797 containerd[1459]: time="2025-01-17T12:19:30.086064816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:19:30.109792 containerd[1459]: time="2025-01-17T12:19:30.107369696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 547.478681ms" Jan 17 12:19:30.109792 containerd[1459]: time="2025-01-17T12:19:30.107442911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:19:30.127794 containerd[1459]: time="2025-01-17T12:19:30.126858141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:19:30.138527 containerd[1459]: time="2025-01-17T12:19:30.137798355Z" level=info msg="CreateContainer within sandbox \"6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:19:30.196441 containerd[1459]: time="2025-01-17T12:19:30.196385007Z" level=info msg="StartContainer for \"0015e45ddb7a8af171e32ad89b0dab88df8b2804c348374ef4641e8b0874dab3\" returns successfully" Jan 17 12:19:30.224993 containerd[1459]: time="2025-01-17T12:19:30.224906040Z" level=info msg="CreateContainer within sandbox \"6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f5cc41c0eb2f52179325da6768ff82935cdb3527d2e788e5d9bd8f27086b0367\"" Jan 17 12:19:30.231702 containerd[1459]: time="2025-01-17T12:19:30.231117627Z" level=info msg="StartContainer for \"f5cc41c0eb2f52179325da6768ff82935cdb3527d2e788e5d9bd8f27086b0367\"" Jan 17 12:19:30.292117 kubelet[2491]: I0117 12:19:30.291702 2491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c94f622-80de-4abd-b2f4-f05253e01f5a" 
path="/var/lib/kubelet/pods/0c94f622-80de-4abd-b2f4-f05253e01f5a/volumes" Jan 17 12:19:30.306080 systemd[1]: Started cri-containerd-f5cc41c0eb2f52179325da6768ff82935cdb3527d2e788e5d9bd8f27086b0367.scope - libcontainer container f5cc41c0eb2f52179325da6768ff82935cdb3527d2e788e5d9bd8f27086b0367. Jan 17 12:19:30.489431 systemd[1]: Started sshd@7-209.38.138.250:22-139.178.68.195:44474.service - OpenSSH per-connection server daemon (139.178.68.195:44474). Jan 17 12:19:30.737901 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 44474 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:30.745352 sshd[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:30.748051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992676677.mount: Deactivated successfully. Jan 17 12:19:30.793594 systemd-logind[1443]: New session 8 of user core. Jan 17 12:19:30.800379 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:19:30.851378 containerd[1459]: time="2025-01-17T12:19:30.851291850Z" level=info msg="StartContainer for \"f5cc41c0eb2f52179325da6768ff82935cdb3527d2e788e5d9bd8f27086b0367\" returns successfully" Jan 17 12:19:31.045827 kubelet[2491]: E0117 12:19:31.044238 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:31.078098 containerd[1459]: time="2025-01-17T12:19:31.077698695Z" level=info msg="CreateContainer within sandbox \"98ec9be6173ab977bed69a3ac96968dc30f76a10a5209587db2930e1d265e064\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:19:31.163985 containerd[1459]: time="2025-01-17T12:19:31.163327586Z" level=info msg="CreateContainer within sandbox \"98ec9be6173ab977bed69a3ac96968dc30f76a10a5209587db2930e1d265e064\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145\"" Jan 17 12:19:31.171786 containerd[1459]: time="2025-01-17T12:19:31.165096896Z" level=info msg="StartContainer for \"8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145\"" Jan 17 12:19:31.237655 kubelet[2491]: I0117 12:19:31.235441 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b466f6854-hrc5h" podStartSLOduration=30.280036543 podStartE2EDuration="37.235406364s" podCreationTimestamp="2025-01-17 12:18:54 +0000 UTC" firstStartedPulling="2025-01-17 12:19:23.157899119 +0000 UTC m=+45.068401711" lastFinishedPulling="2025-01-17 12:19:30.113268959 +0000 UTC m=+52.023771532" observedRunningTime="2025-01-17 12:19:31.077678931 +0000 UTC m=+52.988181538" watchObservedRunningTime="2025-01-17 12:19:31.235406364 +0000 UTC m=+53.145908974" Jan 17 12:19:31.361441 systemd[1]: Started cri-containerd-8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145.scope - libcontainer container 8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145. Jan 17 12:19:31.936440 sshd[4953]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:31.950704 systemd[1]: sshd@7-209.38.138.250:22-139.178.68.195:44474.service: Deactivated successfully. Jan 17 12:19:31.964637 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:19:31.970954 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:19:31.975315 systemd-logind[1443]: Removed session 8. 
Jan 17 12:19:31.979606 containerd[1459]: time="2025-01-17T12:19:31.979506560Z" level=info msg="StartContainer for \"8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145\" returns successfully" Jan 17 12:19:32.058340 kubelet[2491]: E0117 12:19:32.058290 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:33.068041 kubelet[2491]: E0117 12:19:33.068000 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:34.471281 containerd[1459]: time="2025-01-17T12:19:34.470416823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:34.476083 containerd[1459]: time="2025-01-17T12:19:34.475801865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:19:34.479903 containerd[1459]: time="2025-01-17T12:19:34.477556242Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:34.486701 containerd[1459]: time="2025-01-17T12:19:34.484844837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:34.488710 containerd[1459]: time="2025-01-17T12:19:34.487242317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.358313972s" Jan 17 12:19:34.489073 containerd[1459]: time="2025-01-17T12:19:34.489023195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:19:34.495659 containerd[1459]: time="2025-01-17T12:19:34.495601853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:19:34.532881 containerd[1459]: time="2025-01-17T12:19:34.532775061Z" level=info msg="CreateContainer within sandbox \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:19:34.578552 containerd[1459]: time="2025-01-17T12:19:34.577164277Z" level=info msg="CreateContainer within sandbox \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\"" Jan 17 12:19:34.580423 containerd[1459]: time="2025-01-17T12:19:34.580374421Z" level=info msg="StartContainer for \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\"" Jan 17 12:19:34.688588 systemd[1]: Started cri-containerd-1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4.scope - libcontainer container 
1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4. Jan 17 12:19:34.889779 containerd[1459]: time="2025-01-17T12:19:34.887826846Z" level=info msg="StartContainer for \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\" returns successfully" Jan 17 12:19:35.106278 containerd[1459]: time="2025-01-17T12:19:35.106208013Z" level=info msg="StopContainer for \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\" with timeout 30 (s)" Jan 17 12:19:35.107677 containerd[1459]: time="2025-01-17T12:19:35.107071162Z" level=info msg="Stop container \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\" with signal terminated" Jan 17 12:19:35.149473 systemd[1]: cri-containerd-1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4.scope: Deactivated successfully. Jan 17 12:19:35.163592 kubelet[2491]: I0117 12:19:35.163498 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75f85c7775-l4kfg" podStartSLOduration=32.971527395 podStartE2EDuration="41.163472094s" podCreationTimestamp="2025-01-17 12:18:54 +0000 UTC" firstStartedPulling="2025-01-17 12:19:26.30043406 +0000 UTC m=+48.210936647" lastFinishedPulling="2025-01-17 12:19:34.492378769 +0000 UTC m=+56.402881346" observedRunningTime="2025-01-17 12:19:35.163122956 +0000 UTC m=+57.073625556" watchObservedRunningTime="2025-01-17 12:19:35.163472094 +0000 UTC m=+57.073974682" Jan 17 12:19:35.212797 containerd[1459]: time="2025-01-17T12:19:35.211373098Z" level=error msg="ExecSync for \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\" failed" error="failed to exec in container: failed to start exec \"c5261845777c81cda68dd2cc5a23743ca3e4a3c12672b9251cb830d53339fcd1\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" Jan 17 12:19:35.215992 kubelet[2491]: E0117 12:19:35.215489 2491 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"c5261845777c81cda68dd2cc5a23743ca3e4a3c12672b9251cb830d53339fcd1\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4" cmd=["/usr/bin/check-status","-r"] Jan 17 12:19:35.260392 containerd[1459]: time="2025-01-17T12:19:35.259856008Z" level=info msg="shim disconnected" id=1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4 namespace=k8s.io Jan 17 12:19:35.260392 containerd[1459]: time="2025-01-17T12:19:35.260247890Z" level=warning msg="cleaning up after shim disconnected" id=1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4 namespace=k8s.io Jan 17 12:19:35.260392 containerd[1459]: time="2025-01-17T12:19:35.260318837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:35.260392 containerd[1459]: time="2025-01-17T12:19:35.259965492Z" level=error msg="ExecSync for \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"8daf396f4a111a950016dd38d82f73edf33bab39e36be3b92b5ab3435061d5c8\": task 1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4 not found: not found" Jan 17 12:19:35.263422 kubelet[2491]: E0117 12:19:35.262902 2491 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec 
\"8daf396f4a111a950016dd38d82f73edf33bab39e36be3b92b5ab3435061d5c8\": task 1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4 not found: not found" containerID="1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4" cmd=["/usr/bin/check-status","-r"] Jan 17 12:19:35.268471 containerd[1459]: time="2025-01-17T12:19:35.268177119Z" level=error msg="ExecSync for \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4 not found: not found" Jan 17 12:19:35.268993 kubelet[2491]: E0117 12:19:35.268579 2491 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4 not found: not found" containerID="1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4" cmd=["/usr/bin/check-status","-r"] Jan 17 12:19:35.315294 containerd[1459]: time="2025-01-17T12:19:35.315138795Z" level=info msg="StopContainer for \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\" returns successfully" Jan 17 12:19:35.318040 containerd[1459]: time="2025-01-17T12:19:35.317807371Z" level=info msg="StopPodSandbox for \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\"" Jan 17 12:19:35.318040 containerd[1459]: time="2025-01-17T12:19:35.317883331Z" level=info msg="Container to stop \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:19:35.341246 systemd[1]: cri-containerd-e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a.scope: Deactivated successfully. Jan 17 12:19:35.400234 containerd[1459]: time="2025-01-17T12:19:35.399993229Z" level=info msg="shim disconnected" id=e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a namespace=k8s.io Jan 17 12:19:35.400234 containerd[1459]: time="2025-01-17T12:19:35.400095479Z" level=warning msg="cleaning up after shim disconnected" id=e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a namespace=k8s.io Jan 17 12:19:35.400234 containerd[1459]: time="2025-01-17T12:19:35.400107561Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:35.449567 containerd[1459]: time="2025-01-17T12:19:35.449451927Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:19:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:19:35.513493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4-rootfs.mount: Deactivated successfully. Jan 17 12:19:35.513908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a-rootfs.mount: Deactivated successfully. Jan 17 12:19:35.514026 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a-shm.mount: Deactivated successfully. 
Jan 17 12:19:35.646607 systemd-networkd[1366]: cali0367d6b3cb8: Link DOWN
Jan 17 12:19:35.646622 systemd-networkd[1366]: cali0367d6b3cb8: Lost carrier
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.644 [INFO][5160] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a"
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.645 [INFO][5160] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" iface="eth0" netns="/var/run/netns/cni-b3ed6138-8cbe-8754-8ba3-1ce41e442390"
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.645 [INFO][5160] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" iface="eth0" netns="/var/run/netns/cni-b3ed6138-8cbe-8754-8ba3-1ce41e442390"
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.652 [INFO][5160] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" after=7.189708ms iface="eth0" netns="/var/run/netns/cni-b3ed6138-8cbe-8754-8ba3-1ce41e442390"
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.652 [INFO][5160] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a"
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.652 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a"
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.734 [INFO][5166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.734 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.734 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.910 [INFO][5166] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.910 [INFO][5166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.928 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:35.938888 containerd[1459]: 2025-01-17 12:19:35.934 [INFO][5160] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a"
Jan 17 12:19:35.946271 containerd[1459]: time="2025-01-17T12:19:35.944797584Z" level=info msg="TearDown network for sandbox \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\" successfully"
Jan 17 12:19:35.946271 containerd[1459]: time="2025-01-17T12:19:35.946126373Z" level=info msg="StopPodSandbox for \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\" returns successfully"
Jan 17 12:19:35.946474 systemd[1]: run-netns-cni\x2db3ed6138\x2d8cbe\x2d8754\x2d8ba3\x2d1ce41e442390.mount: Deactivated successfully.
Jan 17 12:19:35.951611 containerd[1459]: time="2025-01-17T12:19:35.948345021Z" level=info msg="StopPodSandbox for \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\""
Jan 17 12:19:36.118826 kubelet[2491]: I0117 12:19:36.118411 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a"
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.083 [WARNING][5186] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0", GenerateName:"calico-kube-controllers-75f85c7775-", Namespace:"calico-system", SelfLink:"", UID:"82477d9d-231e-4438-b265-cae0af210b64", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f85c7775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a", Pod:"calico-kube-controllers-75f85c7775-l4kfg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0367d6b3cb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.083 [INFO][5186] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.083 [INFO][5186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" iface="eth0" netns=""
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.083 [INFO][5186] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.083 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.145 [INFO][5192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.146 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.146 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.165 [WARNING][5192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.165 [INFO][5192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0"
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.169 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:36.176754 containerd[1459]: 2025-01-17 12:19:36.172 [INFO][5186] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b"
Jan 17 12:19:36.176754 containerd[1459]: time="2025-01-17T12:19:36.175694721Z" level=info msg="TearDown network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\" successfully"
Jan 17 12:19:36.178603 containerd[1459]: time="2025-01-17T12:19:36.175732952Z" level=info msg="StopPodSandbox for \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\" returns successfully"
Jan 17 12:19:36.335116 kubelet[2491]: I0117 12:19:36.334560 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82477d9d-231e-4438-b265-cae0af210b64-tigera-ca-bundle\") pod \"82477d9d-231e-4438-b265-cae0af210b64\" (UID: \"82477d9d-231e-4438-b265-cae0af210b64\") "
Jan 17 12:19:36.335116 kubelet[2491]: I0117 12:19:36.334619 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st7s2\" (UniqueName: \"kubernetes.io/projected/82477d9d-231e-4438-b265-cae0af210b64-kube-api-access-st7s2\") pod \"82477d9d-231e-4438-b265-cae0af210b64\" (UID: \"82477d9d-231e-4438-b265-cae0af210b64\") "
Jan 17 12:19:36.352836 kubelet[2491]: I0117 12:19:36.351553 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82477d9d-231e-4438-b265-cae0af210b64-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "82477d9d-231e-4438-b265-cae0af210b64" (UID: "82477d9d-231e-4438-b265-cae0af210b64"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:19:36.352735 systemd[1]: var-lib-kubelet-pods-82477d9d\x2d231e\x2d4438\x2db265\x2dcae0af210b64-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
Jan 17 12:19:36.363623 kubelet[2491]: I0117 12:19:36.363529 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82477d9d-231e-4438-b265-cae0af210b64-kube-api-access-st7s2" (OuterVolumeSpecName: "kube-api-access-st7s2") pod "82477d9d-231e-4438-b265-cae0af210b64" (UID: "82477d9d-231e-4438-b265-cae0af210b64"). InnerVolumeSpecName "kube-api-access-st7s2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:19:36.363697 systemd[1]: var-lib-kubelet-pods-82477d9d\x2d231e\x2d4438\x2db265\x2dcae0af210b64-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dst7s2.mount: Deactivated successfully.
Jan 17 12:19:36.436422 kubelet[2491]: I0117 12:19:36.434871 2491 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82477d9d-231e-4438-b265-cae0af210b64-tigera-ca-bundle\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\""
Jan 17 12:19:36.436422 kubelet[2491]: I0117 12:19:36.434908 2491 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-st7s2\" (UniqueName: \"kubernetes.io/projected/82477d9d-231e-4438-b265-cae0af210b64-kube-api-access-st7s2\") on node \"ci-4081.3.0-f-fd30d73867\" DevicePath \"\""
Jan 17 12:19:36.962261 systemd[1]: Started sshd@8-209.38.138.250:22-139.178.68.195:35294.service - OpenSSH per-connection server daemon (139.178.68.195:35294).
Jan 17 12:19:37.134420 systemd[1]: Removed slice kubepods-besteffort-pod82477d9d_231e_4438_b265_cae0af210b64.slice - libcontainer container kubepods-besteffort-pod82477d9d_231e_4438_b265_cae0af210b64.slice.
Jan 17 12:19:37.156090 sshd[5204]: Accepted publickey for core from 139.178.68.195 port 35294 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:19:37.163859 sshd[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:19:37.180857 systemd-logind[1443]: New session 9 of user core.
Jan 17 12:19:37.189066 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 12:19:37.944580 sshd[5204]: pam_unix(sshd:session): session closed for user core
Jan 17 12:19:37.956574 systemd[1]: sshd@8-209.38.138.250:22-139.178.68.195:35294.service: Deactivated successfully.
Jan 17 12:19:37.964626 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 12:19:37.973433 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit.
Jan 17 12:19:37.978182 systemd-logind[1443]: Removed session 9.
Jan 17 12:19:38.142168 containerd[1459]: time="2025-01-17T12:19:38.141957781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:38.166075 containerd[1459]: time="2025-01-17T12:19:38.165965130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 17 12:19:38.180296 containerd[1459]: time="2025-01-17T12:19:38.179931965Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:38.183478 containerd[1459]: time="2025-01-17T12:19:38.183207163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:38.185620 containerd[1459]: time="2025-01-17T12:19:38.184723591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.688892659s"
Jan 17 12:19:38.185620 containerd[1459]: time="2025-01-17T12:19:38.184839684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 17 12:19:38.189210 containerd[1459]: time="2025-01-17T12:19:38.188986137Z" level=info msg="CreateContainer within sandbox \"e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 17 12:19:38.261259 containerd[1459]: time="2025-01-17T12:19:38.260191305Z" level=info msg="CreateContainer within sandbox \"e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"df86162f070ff4c8c1d3ca83ef2afecc261a6569c7e9f7d175e906077f5249e3\""
Jan 17 12:19:38.303027 containerd[1459]: time="2025-01-17T12:19:38.302074703Z" level=info msg="StartContainer for \"df86162f070ff4c8c1d3ca83ef2afecc261a6569c7e9f7d175e906077f5249e3\""
Jan 17 12:19:38.423053 systemd[1]: Started cri-containerd-df86162f070ff4c8c1d3ca83ef2afecc261a6569c7e9f7d175e906077f5249e3.scope - libcontainer container df86162f070ff4c8c1d3ca83ef2afecc261a6569c7e9f7d175e906077f5249e3.
Jan 17 12:19:38.425581 kubelet[2491]: I0117 12:19:38.425442 2491 scope.go:117] "RemoveContainer" containerID="1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4"
Jan 17 12:19:38.475333 kubelet[2491]: I0117 12:19:38.474375 2491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82477d9d-231e-4438-b265-cae0af210b64" path="/var/lib/kubelet/pods/82477d9d-231e-4438-b265-cae0af210b64/volumes"
Jan 17 12:19:38.487807 containerd[1459]: time="2025-01-17T12:19:38.487695869Z" level=info msg="RemoveContainer for \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\""
Jan 17 12:19:38.506364 containerd[1459]: time="2025-01-17T12:19:38.506307908Z" level=info msg="RemoveContainer for \"1378530a1b374f732d29c6e1330890fbaeb08276e01a80b66ba1244c1c84b7a4\" returns successfully"
Jan 17 12:19:38.513363 containerd[1459]: time="2025-01-17T12:19:38.511988018Z" level=info msg="StopPodSandbox for \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\""
Jan 17 12:19:38.513363 containerd[1459]: time="2025-01-17T12:19:38.512140246Z" level=info msg="TearDown network for sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" successfully"
Jan 17 12:19:38.513363 containerd[1459]: time="2025-01-17T12:19:38.512161547Z" level=info msg="StopPodSandbox for \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" returns successfully"
Jan 17 12:19:38.521802 containerd[1459]: time="2025-01-17T12:19:38.521712124Z" level=info msg="RemovePodSandbox for \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\""
Jan 17 12:19:38.533277 containerd[1459]: time="2025-01-17T12:19:38.533210794Z" level=info msg="Forcibly stopping sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\""
Jan 17 12:19:38.533499 containerd[1459]: time="2025-01-17T12:19:38.533468618Z" level=info msg="TearDown network for sandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" successfully"
Jan 17 12:19:38.562142 containerd[1459]: time="2025-01-17T12:19:38.560946592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:19:38.562142 containerd[1459]: time="2025-01-17T12:19:38.562018204Z" level=info msg="RemovePodSandbox \"745bc54d77845bf03da2fc2e227c5365cab88e8ffb32a347777d2a84ff5a86dc\" returns successfully"
Jan 17 12:19:38.573901 containerd[1459]: time="2025-01-17T12:19:38.573400526Z" level=info msg="StopPodSandbox for \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\""
Jan 17 12:19:38.747877 containerd[1459]: time="2025-01-17T12:19:38.746710189Z" level=info msg="StartContainer for \"df86162f070ff4c8c1d3ca83ef2afecc261a6569c7e9f7d175e906077f5249e3\" returns successfully"
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.770 [WARNING][5259] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6f470594-2379-4193-8b55-bd3e6a5996c1", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a", Pod:"coredns-6f6b679f8f-zgmwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7121a239a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.772 [INFO][5259] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.772 [INFO][5259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" iface="eth0" netns=""
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.772 [INFO][5259] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.772 [INFO][5259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.835 [INFO][5275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" HandleID="k8s-pod-network.dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.836 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.836 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.847 [WARNING][5275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" HandleID="k8s-pod-network.dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.847 [INFO][5275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" HandleID="k8s-pod-network.dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.850 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:38.858457 containerd[1459]: 2025-01-17 12:19:38.854 [INFO][5259] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:38.860220 containerd[1459]: time="2025-01-17T12:19:38.858453643Z" level=info msg="TearDown network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\" successfully"
Jan 17 12:19:38.860220 containerd[1459]: time="2025-01-17T12:19:38.858481696Z" level=info msg="StopPodSandbox for \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\" returns successfully"
Jan 17 12:19:38.866011 containerd[1459]: time="2025-01-17T12:19:38.865945215Z" level=info msg="RemovePodSandbox for \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\""
Jan 17 12:19:38.866011 containerd[1459]: time="2025-01-17T12:19:38.866006787Z" level=info msg="Forcibly stopping sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\""
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:38.960 [WARNING][5293] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"6f470594-2379-4193-8b55-bd3e6a5996c1", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"8c1154975a3442ee93bb93d10d2f277b7c023d0af3da2e4d0867a668a8633d0a", Pod:"coredns-6f6b679f8f-zgmwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7121a239a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:38.961 [INFO][5293] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:38.962 [INFO][5293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" iface="eth0" netns=""
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:38.962 [INFO][5293] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:38.962 [INFO][5293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:39.025 [INFO][5303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" HandleID="k8s-pod-network.dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:39.025 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:39.025 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:39.035 [WARNING][5303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" HandleID="k8s-pod-network.dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:39.035 [INFO][5303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" HandleID="k8s-pod-network.dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--zgmwb-eth0"
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:39.038 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:39.045985 containerd[1459]: 2025-01-17 12:19:39.042 [INFO][5293] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a"
Jan 17 12:19:39.048601 containerd[1459]: time="2025-01-17T12:19:39.046041582Z" level=info msg="TearDown network for sandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\" successfully"
Jan 17 12:19:39.053979 containerd[1459]: time="2025-01-17T12:19:39.053882867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:19:39.053979 containerd[1459]: time="2025-01-17T12:19:39.053993611Z" level=info msg="RemovePodSandbox \"dd87f6163bc7f793810f306e2f14362ca4c34c2b304932d50aecc0adb722703a\" returns successfully"
Jan 17 12:19:39.055857 containerd[1459]: time="2025-01-17T12:19:39.055533644Z" level=info msg="StopPodSandbox for \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\""
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.164 [WARNING][5321] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b99954fd-00d0-4234-8172-969ac6f807eb", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228", Pod:"csi-node-driver-h55hv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali258509708ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.164 [INFO][5321] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.164 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" iface="eth0" netns=""
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.164 [INFO][5321] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.164 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.217 [INFO][5327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" HandleID="k8s-pod-network.e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.221 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.221 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.237 [WARNING][5327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" HandleID="k8s-pod-network.e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.238 [INFO][5327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" HandleID="k8s-pod-network.e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.242 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:39.249435 containerd[1459]: 2025-01-17 12:19:39.246 [INFO][5321] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:39.250530 containerd[1459]: time="2025-01-17T12:19:39.249476678Z" level=info msg="TearDown network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\" successfully"
Jan 17 12:19:39.250530 containerd[1459]: time="2025-01-17T12:19:39.249512184Z" level=info msg="StopPodSandbox for \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\" returns successfully"
Jan 17 12:19:39.254619 containerd[1459]: time="2025-01-17T12:19:39.251964746Z" level=info msg="RemovePodSandbox for \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\""
Jan 17 12:19:39.254619 containerd[1459]: time="2025-01-17T12:19:39.252005693Z" level=info msg="Forcibly stopping sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\""
Jan 17 12:19:39.403967 systemd[1]: cri-containerd-8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145.scope: Deactivated successfully.
Jan 17 12:19:39.405145 systemd[1]: cri-containerd-8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145.scope: Consumed 1.463s CPU time.
Jan 17 12:19:39.537224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145-rootfs.mount: Deactivated successfully.
Jan 17 12:19:39.545884 containerd[1459]: time="2025-01-17T12:19:39.545789329Z" level=info msg="shim disconnected" id=8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145 namespace=k8s.io
Jan 17 12:19:39.545884 containerd[1459]: time="2025-01-17T12:19:39.545864255Z" level=warning msg="cleaning up after shim disconnected" id=8c3b1d5f636553670ca17f1c4405c0cee697d4af81e42b0ae63963f2136a4145 namespace=k8s.io
Jan 17 12:19:39.545884 containerd[1459]: time="2025-01-17T12:19:39.545875204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.362 [WARNING][5346] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b99954fd-00d0-4234-8172-969ac6f807eb", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"e06b43e58b7279337376ca4475ccc0dcecf049cb3a95ee19aeab290490321228", Pod:"csi-node-driver-h55hv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali258509708ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.363 [INFO][5346] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.363 [INFO][5346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" iface="eth0" netns=""
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.363 [INFO][5346] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.363 [INFO][5346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.493 [INFO][5352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" HandleID="k8s-pod-network.e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.493 [INFO][5352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.493 [INFO][5352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.535 [WARNING][5352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" HandleID="k8s-pod-network.e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.537 [INFO][5352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" HandleID="k8s-pod-network.e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9" Workload="ci--4081.3.0--f--fd30d73867-k8s-csi--node--driver--h55hv-eth0"
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.555 [INFO][5352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:39.584318 containerd[1459]: 2025-01-17 12:19:39.567 [INFO][5346] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9"
Jan 17 12:19:39.590203 containerd[1459]: time="2025-01-17T12:19:39.585713441Z" level=info msg="TearDown network for sandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\" successfully"
Jan 17 12:19:39.642464 containerd[1459]: time="2025-01-17T12:19:39.642400180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:19:39.643025 containerd[1459]: time="2025-01-17T12:19:39.642797936Z" level=info msg="RemovePodSandbox \"e9aa889a71e88b5d32cdecf01943cb5cea458241caf41052af422feb34d6fea9\" returns successfully"
Jan 17 12:19:39.649914 containerd[1459]: time="2025-01-17T12:19:39.649853482Z" level=info msg="StopPodSandbox for \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\""
Jan 17 12:19:39.736624 kubelet[2491]: I0117 12:19:39.736553 2491 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 17 12:19:39.740248 kubelet[2491]: I0117 12:19:39.740049 2491 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.853 [WARNING][5395] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0", GenerateName:"calico-apiserver-7b466f6854-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e7faaed-af39-479f-9b85-c936c88dbeb7", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b466f6854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc", Pod:"calico-apiserver-7b466f6854-xrf2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2edcfdc5120", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.856 [INFO][5395] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.856 [INFO][5395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" iface="eth0" netns=""
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.856 [INFO][5395] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.856 [INFO][5395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.928 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" HandleID="k8s-pod-network.ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.930 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.930 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.939 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" HandleID="k8s-pod-network.ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.939 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" HandleID="k8s-pod-network.ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.942 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:39.949359 containerd[1459]: 2025-01-17 12:19:39.946 [INFO][5395] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:39.950939 containerd[1459]: time="2025-01-17T12:19:39.950888182Z" level=info msg="TearDown network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\" successfully"
Jan 17 12:19:39.951070 containerd[1459]: time="2025-01-17T12:19:39.951048017Z" level=info msg="StopPodSandbox for \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\" returns successfully"
Jan 17 12:19:39.952792 containerd[1459]: time="2025-01-17T12:19:39.952672435Z" level=info msg="RemovePodSandbox for \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\""
Jan 17 12:19:39.952792 containerd[1459]: time="2025-01-17T12:19:39.952768620Z" level=info msg="Forcibly stopping sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\""
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.072 [WARNING][5419] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0", GenerateName:"calico-apiserver-7b466f6854-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e7faaed-af39-479f-9b85-c936c88dbeb7", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b466f6854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"77ff8ea28cd6538c912a02c940a3c1778b8ae1310ae75ca29a7d7f0f882233dc", Pod:"calico-apiserver-7b466f6854-xrf2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2edcfdc5120", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.075 [INFO][5419] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.075 [INFO][5419] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" iface="eth0" netns=""
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.075 [INFO][5419] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.075 [INFO][5419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.133 [INFO][5425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" HandleID="k8s-pod-network.ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.134 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.134 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.154 [WARNING][5425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" HandleID="k8s-pod-network.ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.154 [INFO][5425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" HandleID="k8s-pod-network.ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--xrf2v-eth0"
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.158 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:40.170525 containerd[1459]: 2025-01-17 12:19:40.164 [INFO][5419] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386"
Jan 17 12:19:40.170525 containerd[1459]: time="2025-01-17T12:19:40.170254377Z" level=info msg="TearDown network for sandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\" successfully"
Jan 17 12:19:40.177242 containerd[1459]: time="2025-01-17T12:19:40.175451266Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:19:40.177242 containerd[1459]: time="2025-01-17T12:19:40.175529946Z" level=info msg="RemovePodSandbox \"ce34773ee130ba512f4bbe3e3b3f5235d165e88a2037eca8e82dfdcd78aad386\" returns successfully"
Jan 17 12:19:40.177439 kubelet[2491]: E0117 12:19:40.176612 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:40.179535 containerd[1459]: time="2025-01-17T12:19:40.179330201Z" level=info msg="StopPodSandbox for \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\""
Jan 17 12:19:40.221377 containerd[1459]: time="2025-01-17T12:19:40.221247152Z" level=info msg="CreateContainer within sandbox \"98ec9be6173ab977bed69a3ac96968dc30f76a10a5209587db2930e1d265e064\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 17 12:19:40.231990 kubelet[2491]: I0117 12:19:40.230405 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h55hv" podStartSLOduration=30.147965119 podStartE2EDuration="46.230372231s" podCreationTimestamp="2025-01-17 12:18:54 +0000 UTC" firstStartedPulling="2025-01-17 12:19:22.103972626 +0000 UTC m=+44.014475215" lastFinishedPulling="2025-01-17 12:19:38.186379757 +0000 UTC m=+60.096882327" observedRunningTime="2025-01-17 12:19:39.182979922 +0000 UTC m=+61.093482612" watchObservedRunningTime="2025-01-17 12:19:40.230372231 +0000 UTC m=+62.140874827"
Jan 17 12:19:40.282211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441762326.mount: Deactivated successfully.
Jan 17 12:19:40.295696 containerd[1459]: time="2025-01-17T12:19:40.295188061Z" level=info msg="CreateContainer within sandbox \"98ec9be6173ab977bed69a3ac96968dc30f76a10a5209587db2930e1d265e064\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1add83100d22b3f2549302dd724b8679c98bc2ddd7e018d9968e02a4af3bb298\""
Jan 17 12:19:40.301441 containerd[1459]: time="2025-01-17T12:19:40.301363043Z" level=info msg="StartContainer for \"1add83100d22b3f2549302dd724b8679c98bc2ddd7e018d9968e02a4af3bb298\""
Jan 17 12:19:40.389626 systemd[1]: Started cri-containerd-1add83100d22b3f2549302dd724b8679c98bc2ddd7e018d9968e02a4af3bb298.scope - libcontainer container 1add83100d22b3f2549302dd724b8679c98bc2ddd7e018d9968e02a4af3bb298.
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.356 [WARNING][5445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0", GenerateName:"calico-apiserver-7b466f6854-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2d2e829-8efa-4f4c-b9c2-2cd87395f520", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b466f6854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6", Pod:"calico-apiserver-7b466f6854-hrc5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2ce85091b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.357 [INFO][5445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.357 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" iface="eth0" netns=""
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.357 [INFO][5445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.357 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.431 [INFO][5464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" HandleID="k8s-pod-network.e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.431 [INFO][5464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.431 [INFO][5464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.442 [WARNING][5464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" HandleID="k8s-pod-network.e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.442 [INFO][5464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" HandleID="k8s-pod-network.e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.445 [INFO][5464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:40.449409 containerd[1459]: 2025-01-17 12:19:40.446 [INFO][5445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:40.451774 containerd[1459]: time="2025-01-17T12:19:40.449478030Z" level=info msg="TearDown network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\" successfully"
Jan 17 12:19:40.451774 containerd[1459]: time="2025-01-17T12:19:40.449521357Z" level=info msg="StopPodSandbox for \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\" returns successfully"
Jan 17 12:19:40.453031 containerd[1459]: time="2025-01-17T12:19:40.452966564Z" level=info msg="RemovePodSandbox for \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\""
Jan 17 12:19:40.453169 containerd[1459]: time="2025-01-17T12:19:40.453041244Z" level=info msg="Forcibly stopping sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\""
Jan 17 12:19:40.594977 containerd[1459]: time="2025-01-17T12:19:40.594017152Z" level=info msg="StartContainer for \"1add83100d22b3f2549302dd724b8679c98bc2ddd7e018d9968e02a4af3bb298\" returns successfully"
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.525 [WARNING][5493] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0", GenerateName:"calico-apiserver-7b466f6854-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2d2e829-8efa-4f4c-b9c2-2cd87395f520", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b466f6854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"6ba975f575bd431372e670765b3b7a6cb3eda3c4faa19d1861df114c362796c6", Pod:"calico-apiserver-7b466f6854-hrc5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2ce85091b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.526 [INFO][5493] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.527 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" iface="eth0" netns=""
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.527 [INFO][5493] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.527 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.610 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" HandleID="k8s-pod-network.e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.610 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.610 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.626 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" HandleID="k8s-pod-network.e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.626 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" HandleID="k8s-pod-network.e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--apiserver--7b466f6854--hrc5h-eth0"
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.632 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:19:40.639793 containerd[1459]: 2025-01-17 12:19:40.634 [INFO][5493] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f"
Jan 17 12:19:40.642150 containerd[1459]: time="2025-01-17T12:19:40.640568664Z" level=info msg="TearDown network for sandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\" successfully"
Jan 17 12:19:40.647315 containerd[1459]: time="2025-01-17T12:19:40.646302660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:19:40.647315 containerd[1459]: time="2025-01-17T12:19:40.646380455Z" level=info msg="RemovePodSandbox \"e399849c70323198d1eb1e46bf23ee3c261991b9f694606d0a37b73bb0ac890f\" returns successfully"
Jan 17 12:19:40.650052 containerd[1459]: time="2025-01-17T12:19:40.648316889Z" level=info msg="StopPodSandbox for \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\""
Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.724 [WARNING][5527] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"540c0bc8-bb65-4107-8514-8f6a7b04b667", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7", Pod:"coredns-6f6b679f8f-kks2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic939ee61ee3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.725 [INFO][5527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"
Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.725 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" iface="eth0" netns=""
Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.725 [INFO][5527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"
Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.725 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40"
Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.786 [INFO][5537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" HandleID="k8s-pod-network.e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0"
Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.786 [INFO][5537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.786 [INFO][5537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.803 [WARNING][5537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" HandleID="k8s-pod-network.e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0" Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.803 [INFO][5537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" HandleID="k8s-pod-network.e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0" Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.805 [INFO][5537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:40.812916 containerd[1459]: 2025-01-17 12:19:40.809 [INFO][5527] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Jan 17 12:19:40.815096 containerd[1459]: time="2025-01-17T12:19:40.813935429Z" level=info msg="TearDown network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\" successfully" Jan 17 12:19:40.815096 containerd[1459]: time="2025-01-17T12:19:40.813973150Z" level=info msg="StopPodSandbox for \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\" returns successfully" Jan 17 12:19:40.815390 containerd[1459]: time="2025-01-17T12:19:40.815242050Z" level=info msg="RemovePodSandbox for \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\"" Jan 17 12:19:40.815454 containerd[1459]: time="2025-01-17T12:19:40.815411604Z" level=info msg="Forcibly stopping sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\"" Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.897 [WARNING][5557] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"540c0bc8-bb65-4107-8514-8f6a7b04b667", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 18, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"057fa003b811e3b09fc5565096863c5eb1d140925ae4787abcb420df525704f7", Pod:"coredns-6f6b679f8f-kks2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic939ee61ee3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.898 [INFO][5557] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.898 [INFO][5557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" iface="eth0" netns="" Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.898 [INFO][5557] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.898 [INFO][5557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.954 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" HandleID="k8s-pod-network.e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0" Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.954 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.954 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.975 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" HandleID="k8s-pod-network.e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0" Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.975 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" HandleID="k8s-pod-network.e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Workload="ci--4081.3.0--f--fd30d73867-k8s-coredns--6f6b679f8f--kks2v-eth0" Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.981 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:40.988881 containerd[1459]: 2025-01-17 12:19:40.986 [INFO][5557] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40" Jan 17 12:19:40.989409 containerd[1459]: time="2025-01-17T12:19:40.989103670Z" level=info msg="TearDown network for sandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\" successfully" Jan 17 12:19:40.998196 containerd[1459]: time="2025-01-17T12:19:40.997403181Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:19:40.998196 containerd[1459]: time="2025-01-17T12:19:40.997598955Z" level=info msg="RemovePodSandbox \"e14238a0946c47a0ed8654432d9e0cad89799161b04d117be58babdbd5bead40\" returns successfully" Jan 17 12:19:41.000852 containerd[1459]: time="2025-01-17T12:19:40.998761405Z" level=info msg="StopPodSandbox for \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\"" Jan 17 12:19:41.000852 containerd[1459]: time="2025-01-17T12:19:40.999169966Z" level=info msg="TearDown network for sandbox \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\" successfully" Jan 17 12:19:41.000852 containerd[1459]: time="2025-01-17T12:19:40.999196029Z" level=info msg="StopPodSandbox for \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\" returns successfully" Jan 17 12:19:41.001786 containerd[1459]: time="2025-01-17T12:19:41.001653699Z" level=info msg="RemovePodSandbox for \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\"" Jan 17 12:19:41.001786 containerd[1459]: time="2025-01-17T12:19:41.001778532Z" level=info msg="Forcibly stopping sandbox \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\"" Jan 17 12:19:41.002011 containerd[1459]: time="2025-01-17T12:19:41.001953194Z" level=info msg="TearDown network for sandbox \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\" successfully" Jan 17 12:19:41.010950 containerd[1459]: time="2025-01-17T12:19:41.009148230Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
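Editor's note: the teardown entries above repeat a fixed Calico CNI DEL shape: release IP address(es), take the host-wide IPAM lock, try to release by handle ID (warning and ignoring when the address is already gone), fall back to the workload ID, release the lock, and finish teardown. Below is a minimal Go sketch of that release-under-lock pattern; the types, map layout, and demo IDs are illustrative assumptions, not Calico's actual implementation.

    package main

    import (
    	"fmt"
    	"sync"
    )

    type ipamStore struct {
    	mu         sync.Mutex        // the "host-wide IPAM lock" in the log
    	byHandle   map[string]string // handle ID -> address
    	byWorkload map[string]string // workload ID -> address (older keying)
    }

    func (s *ipamStore) releaseIPs(handleID, workloadID string) {
    	s.mu.Lock()         // "Acquired host-wide IPAM lock."
    	defer s.mu.Unlock() // "Released host-wide IPAM lock."

    	if ip, ok := s.byHandle[handleID]; ok {
    		delete(s.byHandle, handleID)
    		fmt.Printf("released %s using handleID %s\n", ip, handleID)
    		return
    	}
    	// "Asked to release address but it doesn't exist. Ignoring."
    	fmt.Println("warning: no address recorded for handle, ignoring")

    	// "Releasing address using workloadID" as the fallback path.
    	if ip, ok := s.byWorkload[workloadID]; ok {
    		delete(s.byWorkload, workloadID)
    		fmt.Printf("released %s using workloadID %s\n", ip, workloadID)
    	}
    }

    func main() {
    	// Hypothetical IDs; the address reuses one shown in the log.
    	s := &ipamStore{
    		byHandle:   map[string]string{},
    		byWorkload: map[string]string{"demo-pod-eth0": "192.168.52.132"},
    	}
    	s.releaseIPs("k8s-pod-network.demo", "demo-pod-eth0")
    }

Either way the lookup resolves, teardown is reported complete; a missing address is deliberately not an error, which is why repeated StopPodSandbox/RemovePodSandbox passes over the same sandbox ID stay idempotent.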
Jan 17 12:19:41.010950 containerd[1459]: time="2025-01-17T12:19:41.009322028Z" level=info msg="RemovePodSandbox \"1dbe89695f75be8d8f665a121ac442543ad505e3de9fb43098dcf34cc7fd8179\" returns successfully" Jan 17 12:19:41.013975 containerd[1459]: time="2025-01-17T12:19:41.013492106Z" level=info msg="StopPodSandbox for \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\"" Jan 17 12:19:41.015781 kubelet[2491]: E0117 12:19:41.014169 2491 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c94f622-80de-4abd-b2f4-f05253e01f5a" containerName="calico-typha" Jan 17 12:19:41.015781 kubelet[2491]: E0117 12:19:41.014230 2491 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82477d9d-231e-4438-b265-cae0af210b64" containerName="calico-kube-controllers" Jan 17 12:19:41.015781 kubelet[2491]: I0117 12:19:41.014327 2491 memory_manager.go:354] "RemoveStaleState removing state" podUID="82477d9d-231e-4438-b265-cae0af210b64" containerName="calico-kube-controllers" Jan 17 12:19:41.015781 kubelet[2491]: I0117 12:19:41.014350 2491 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c94f622-80de-4abd-b2f4-f05253e01f5a" containerName="calico-typha" Jan 17 12:19:41.069709 systemd[1]: Created slice kubepods-besteffort-pod216cc737_7c52_482d_ae13_e22e7ad98c6f.slice - libcontainer container kubepods-besteffort-pod216cc737_7c52_482d_ae13_e22e7ad98c6f.slice. Jan 17 12:19:41.135726 kubelet[2491]: I0117 12:19:41.135586 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2mf\" (UniqueName: \"kubernetes.io/projected/216cc737-7c52-482d-ae13-e22e7ad98c6f-kube-api-access-7z2mf\") pod \"calico-kube-controllers-6f8d969f76-v6xkz\" (UID: \"216cc737-7c52-482d-ae13-e22e7ad98c6f\") " pod="calico-system/calico-kube-controllers-6f8d969f76-v6xkz" Jan 17 12:19:41.135726 kubelet[2491]: I0117 12:19:41.135651 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/216cc737-7c52-482d-ae13-e22e7ad98c6f-tigera-ca-bundle\") pod \"calico-kube-controllers-6f8d969f76-v6xkz\" (UID: \"216cc737-7c52-482d-ae13-e22e7ad98c6f\") " pod="calico-system/calico-kube-controllers-6f8d969f76-v6xkz" Jan 17 12:19:41.196936 kubelet[2491]: E0117 12:19:41.196253 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:41.243206 kubelet[2491]: I0117 12:19:41.242827 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vmbnf" podStartSLOduration=14.242799195 podStartE2EDuration="14.242799195s" podCreationTimestamp="2025-01-17 12:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:41.237046169 +0000 UTC m=+63.147548781" watchObservedRunningTime="2025-01-17 12:19:41.242799195 +0000 UTC m=+63.153301795" Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.143 [WARNING][5582] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.144 [INFO][5582] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.144 [INFO][5582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" iface="eth0" netns="" Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.145 [INFO][5582] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.145 [INFO][5582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.205 [INFO][5593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.209 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.214 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.242 [WARNING][5593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.242 [INFO][5593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.254 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:41.287371 containerd[1459]: 2025-01-17 12:19:41.273 [INFO][5582] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Jan 17 12:19:41.289212 containerd[1459]: time="2025-01-17T12:19:41.288721704Z" level=info msg="TearDown network for sandbox \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\" successfully" Jan 17 12:19:41.289212 containerd[1459]: time="2025-01-17T12:19:41.288808000Z" level=info msg="StopPodSandbox for \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\" returns successfully" Jan 17 12:19:41.293038 containerd[1459]: time="2025-01-17T12:19:41.292804594Z" level=info msg="RemovePodSandbox for \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\"" Jan 17 12:19:41.293038 containerd[1459]: time="2025-01-17T12:19:41.292865221Z" level=info msg="Forcibly stopping sandbox \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\"" Jan 17 12:19:41.385775 containerd[1459]: time="2025-01-17T12:19:41.385695938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f8d969f76-v6xkz,Uid:216cc737-7c52-482d-ae13-e22e7ad98c6f,Namespace:calico-system,Attempt:0,}" Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.391 [WARNING][5631] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.392 [INFO][5631] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.392 [INFO][5631] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" iface="eth0" netns="" Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.392 [INFO][5631] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.392 [INFO][5631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.519 [INFO][5641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.520 [INFO][5641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.520 [INFO][5641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.537 [WARNING][5641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.537 [INFO][5641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" HandleID="k8s-pod-network.e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.545 [INFO][5641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:41.582357 containerd[1459]: 2025-01-17 12:19:41.554 [INFO][5631] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a" Jan 17 12:19:41.582357 containerd[1459]: time="2025-01-17T12:19:41.582118523Z" level=info msg="TearDown network for sandbox \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\" successfully" Jan 17 12:19:41.631573 containerd[1459]: time="2025-01-17T12:19:41.631211318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:19:41.631573 containerd[1459]: time="2025-01-17T12:19:41.631295366Z" level=info msg="RemovePodSandbox \"e43b51ebb198bb315e2ddeda6667f1f6050cf04fe898ec81c920be3d8754aa8a\" returns successfully" Jan 17 12:19:41.633328 containerd[1459]: time="2025-01-17T12:19:41.632967254Z" level=info msg="StopPodSandbox for \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\"" Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.734 [WARNING][5676] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.735 [INFO][5676] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.735 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" iface="eth0" netns="" Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.735 [INFO][5676] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.735 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.777 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.778 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.779 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.801 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.801 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.807 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:41.813150 containerd[1459]: 2025-01-17 12:19:41.811 [INFO][5676] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Jan 17 12:19:41.816317 containerd[1459]: time="2025-01-17T12:19:41.813173247Z" level=info msg="TearDown network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\" successfully" Jan 17 12:19:41.816317 containerd[1459]: time="2025-01-17T12:19:41.813210632Z" level=info msg="StopPodSandbox for \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\" returns successfully" Jan 17 12:19:41.816317 containerd[1459]: time="2025-01-17T12:19:41.813943448Z" level=info msg="RemovePodSandbox for \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\"" Jan 17 12:19:41.816317 containerd[1459]: time="2025-01-17T12:19:41.813995434Z" level=info msg="Forcibly stopping sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\"" Jan 17 12:19:41.922186 systemd-networkd[1366]: calicfe8c694b3d: Link UP Jan 17 12:19:41.925579 systemd-networkd[1366]: calicfe8c694b3d: Gained carrier Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.510 [INFO][5645] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0 calico-kube-controllers-6f8d969f76- calico-system 216cc737-7c52-482d-ae13-e22e7ad98c6f 1202 0 2025-01-17 12:19:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f8d969f76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-f-fd30d73867 calico-kube-controllers-6f8d969f76-v6xkz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicfe8c694b3d [] []}} ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Namespace="calico-system" Pod="calico-kube-controllers-6f8d969f76-v6xkz" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.510 [INFO][5645] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Namespace="calico-system" Pod="calico-kube-controllers-6f8d969f76-v6xkz" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.679 [INFO][5658] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" HandleID="k8s-pod-network.a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.801 [INFO][5658] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" HandleID="k8s-pod-network.a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011aa00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-f-fd30d73867", "pod":"calico-kube-controllers-6f8d969f76-v6xkz", "timestamp":"2025-01-17 
12:19:41.678977272 +0000 UTC"}, Hostname:"ci-4081.3.0-f-fd30d73867", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.802 [INFO][5658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.807 [INFO][5658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.809 [INFO][5658] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-f-fd30d73867' Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.819 [INFO][5658] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" host="ci-4081.3.0-f-fd30d73867" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.837 [INFO][5658] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-f-fd30d73867" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.847 [INFO][5658] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.850 [INFO][5658] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.856 [INFO][5658] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.3.0-f-fd30d73867" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.857 [INFO][5658] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" host="ci-4081.3.0-f-fd30d73867" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.861 [INFO][5658] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553 Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.871 [INFO][5658] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" host="ci-4081.3.0-f-fd30d73867" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.894 [INFO][5658] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.135/26] block=192.168.52.128/26 handle="k8s-pod-network.a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" host="ci-4081.3.0-f-fd30d73867" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.895 [INFO][5658] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.135/26] handle="k8s-pod-network.a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" host="ci-4081.3.0-f-fd30d73867" Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.896 [INFO][5658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
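Editor's note: the IPAM walk above is Calico's block-affinity allocation: the node holds an affinity for the /26 block 192.168.52.128/26, the block is loaded under the host-wide lock, one free address is taken, and the block is written back to claim it, here yielding 192.168.52.135/26. A rough Go sketch under those assumptions follows; the block type is hypothetical, not Calico's real data model, and the pre-claimed addresses are assumed so the scan lands on .135 the way the log does.

    package main

    import (
    	"fmt"
    	"net"
    )

    type block struct {
    	cidr *net.IPNet      // the affine block, e.g. 192.168.52.128/26
    	used map[string]bool // addresses already claimed from it
    }

    // assignOne mirrors "Attempting to assign 1 addresses from block":
    // scan the block for a free address and mark it claimed.
    func (b *block) assignOne() (net.IP, error) {
    	base := b.cidr.IP.Mask(b.cidr.Mask)
    	for i := 0; i < 64; i++ { // a /26 holds 64 addresses
    		cand := make(net.IP, len(base))
    		copy(cand, base)
    		cand[len(cand)-1] += byte(i)
    		if !b.used[cand.String()] {
    			b.used[cand.String()] = true // "Writing block in order to claim IPs"
    			return cand, nil
    		}
    	}
    	return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
    	_, cidr, _ := net.ParseCIDR("192.168.52.128/26")
    	b := &block{cidr: cidr, used: map[string]bool{
    		// assumed already claimed on this node so .135 comes up next
    		"192.168.52.128": true, "192.168.52.129": true, "192.168.52.130": true,
    		"192.168.52.131": true, "192.168.52.132": true, "192.168.52.133": true,
    		"192.168.52.134": true,
    	}}
    	ip, err := b.assignOne()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("assigned", ip) // 192.168.52.135, matching the log
    }

Keeping per-node /26 blocks means most allocations touch only the node's own block under the lock, which is why the affinity lookup ("Trying affinity for 192.168.52.128/26") comes before any attempt to claim a new block.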
Jan 17 12:19:41.967076 containerd[1459]: 2025-01-17 12:19:41.896 [INFO][5658] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.135/26] IPv6=[] ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" HandleID="k8s-pod-network.a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" Jan 17 12:19:41.969335 containerd[1459]: 2025-01-17 12:19:41.906 [INFO][5645] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Namespace="calico-system" Pod="calico-kube-controllers-6f8d969f76-v6xkz" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0", GenerateName:"calico-kube-controllers-6f8d969f76-", Namespace:"calico-system", SelfLink:"", UID:"216cc737-7c52-482d-ae13-e22e7ad98c6f", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f8d969f76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"", Pod:"calico-kube-controllers-6f8d969f76-v6xkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicfe8c694b3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:41.969335 containerd[1459]: 2025-01-17 12:19:41.906 [INFO][5645] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.135/32] ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Namespace="calico-system" Pod="calico-kube-controllers-6f8d969f76-v6xkz" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" Jan 17 12:19:41.969335 containerd[1459]: 2025-01-17 12:19:41.906 [INFO][5645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfe8c694b3d ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Namespace="calico-system" Pod="calico-kube-controllers-6f8d969f76-v6xkz" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" Jan 17 12:19:41.969335 containerd[1459]: 2025-01-17 12:19:41.926 [INFO][5645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Namespace="calico-system" Pod="calico-kube-controllers-6f8d969f76-v6xkz" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" Jan 17 12:19:41.969335 
containerd[1459]: 2025-01-17 12:19:41.928 [INFO][5645] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Namespace="calico-system" Pod="calico-kube-controllers-6f8d969f76-v6xkz" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0", GenerateName:"calico-kube-controllers-6f8d969f76-", Namespace:"calico-system", SelfLink:"", UID:"216cc737-7c52-482d-ae13-e22e7ad98c6f", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f8d969f76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-f-fd30d73867", ContainerID:"a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553", Pod:"calico-kube-controllers-6f8d969f76-v6xkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicfe8c694b3d", MAC:"3e:2d:b5:e3:21:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:41.969335 containerd[1459]: 2025-01-17 12:19:41.958 [INFO][5645] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553" Namespace="calico-system" Pod="calico-kube-controllers-6f8d969f76-v6xkz" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--6f8d969f76--v6xkz-eth0" Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:41.895 [WARNING][5708] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" WorkloadEndpoint="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:41.895 [INFO][5708] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:41.895 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" iface="eth0" netns="" Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:41.895 [INFO][5708] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:41.895 [INFO][5708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:42.062 [INFO][5718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:42.064 [INFO][5718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:42.065 [INFO][5718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:42.088 [WARNING][5718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:42.088 [INFO][5718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" HandleID="k8s-pod-network.5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Workload="ci--4081.3.0--f--fd30d73867-k8s-calico--kube--controllers--75f85c7775--l4kfg-eth0" Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:42.093 [INFO][5718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:42.107944 containerd[1459]: 2025-01-17 12:19:42.101 [INFO][5708] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b" Jan 17 12:19:42.119300 containerd[1459]: time="2025-01-17T12:19:42.118809982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:42.119300 containerd[1459]: time="2025-01-17T12:19:42.118877655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:42.119300 containerd[1459]: time="2025-01-17T12:19:42.118899494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:42.119300 containerd[1459]: time="2025-01-17T12:19:42.119027260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:42.166846 containerd[1459]: time="2025-01-17T12:19:42.166288894Z" level=info msg="TearDown network for sandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\" successfully" Jan 17 12:19:42.181177 containerd[1459]: time="2025-01-17T12:19:42.179191073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:19:42.181177 containerd[1459]: time="2025-01-17T12:19:42.179282241Z" level=info msg="RemovePodSandbox \"5056d89a981675069034c9a2c62aab694f5c269adf690ab5fdd906634e164e6b\" returns successfully" Jan 17 12:19:42.200489 systemd[1]: Started cri-containerd-a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553.scope - libcontainer container a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553. Jan 17 12:19:42.235320 kubelet[2491]: E0117 12:19:42.235258 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:42.290444 systemd[1]: run-containerd-runc-k8s.io-1add83100d22b3f2549302dd724b8679c98bc2ddd7e018d9968e02a4af3bb298-runc.gbHdJ0.mount: Deactivated successfully. Jan 17 12:19:42.402951 containerd[1459]: time="2025-01-17T12:19:42.401934769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f8d969f76-v6xkz,Uid:216cc737-7c52-482d-ae13-e22e7ad98c6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553\"" Jan 17 12:19:42.441502 containerd[1459]: time="2025-01-17T12:19:42.438789377Z" level=info msg="CreateContainer within sandbox \"a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:19:42.469362 containerd[1459]: time="2025-01-17T12:19:42.469246203Z" level=info msg="CreateContainer within sandbox \"a08bf991e61cee5f837da950d18c907d2bbaa28c9d69c13bdd47675476726553\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"53d6389a78566e803c6303fd2c95c26dc4a085730284a2efb1d08b68b311a390\"" Jan 17 12:19:42.472854 containerd[1459]: time="2025-01-17T12:19:42.472195229Z" level=info msg="StartContainer for \"53d6389a78566e803c6303fd2c95c26dc4a085730284a2efb1d08b68b311a390\"" Jan 17 12:19:42.534371 systemd[1]: Started cri-containerd-53d6389a78566e803c6303fd2c95c26dc4a085730284a2efb1d08b68b311a390.scope - libcontainer container 53d6389a78566e803c6303fd2c95c26dc4a085730284a2efb1d08b68b311a390. Jan 17 12:19:42.718181 containerd[1459]: time="2025-01-17T12:19:42.718025024Z" level=info msg="StartContainer for \"53d6389a78566e803c6303fd2c95c26dc4a085730284a2efb1d08b68b311a390\" returns successfully" Jan 17 12:19:42.967876 systemd[1]: Started sshd@9-209.38.138.250:22-139.178.68.195:35296.service - OpenSSH per-connection server daemon (139.178.68.195:35296). Jan 17 12:19:43.143008 sshd[5924]: Accepted publickey for core from 139.178.68.195 port 35296 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:43.146599 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:43.156605 systemd-logind[1443]: New session 10 of user core. 
Jan 17 12:19:43.162025 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:19:43.412790 systemd-networkd[1366]: calicfe8c694b3d: Gained IPv6LL Jan 17 12:19:44.024688 sshd[5924]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:44.033542 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:19:44.034620 systemd[1]: sshd@9-209.38.138.250:22-139.178.68.195:35296.service: Deactivated successfully. Jan 17 12:19:44.042629 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:19:44.045572 systemd-logind[1443]: Removed session 10. Jan 17 12:19:49.046385 systemd[1]: Started sshd@10-209.38.138.250:22-139.178.68.195:39934.service - OpenSSH per-connection server daemon (139.178.68.195:39934). Jan 17 12:19:49.149574 sshd[6104]: Accepted publickey for core from 139.178.68.195 port 39934 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:49.151563 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:49.159347 systemd-logind[1443]: New session 11 of user core. Jan 17 12:19:49.167190 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:19:49.485025 sshd[6104]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:49.495670 systemd[1]: sshd@10-209.38.138.250:22-139.178.68.195:39934.service: Deactivated successfully. Jan 17 12:19:49.499403 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:19:49.502024 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:19:49.511607 systemd[1]: Started sshd@11-209.38.138.250:22-139.178.68.195:39944.service - OpenSSH per-connection server daemon (139.178.68.195:39944). Jan 17 12:19:49.514574 systemd-logind[1443]: Removed session 11. Jan 17 12:19:49.557667 sshd[6121]: Accepted publickey for core from 139.178.68.195 port 39944 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:49.561115 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:49.568625 systemd-logind[1443]: New session 12 of user core. Jan 17 12:19:49.576275 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:19:49.850362 sshd[6121]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:49.868418 systemd[1]: sshd@11-209.38.138.250:22-139.178.68.195:39944.service: Deactivated successfully. Jan 17 12:19:49.876577 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:19:49.880059 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:19:49.897930 systemd[1]: Started sshd@12-209.38.138.250:22-139.178.68.195:39960.service - OpenSSH per-connection server daemon (139.178.68.195:39960). Jan 17 12:19:49.905157 systemd-logind[1443]: Removed session 12. Jan 17 12:19:49.960428 sshd[6146]: Accepted publickey for core from 139.178.68.195 port 39960 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:49.962826 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:49.971185 systemd-logind[1443]: New session 13 of user core. Jan 17 12:19:49.977302 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:19:50.180540 sshd[6146]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:50.186032 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. 
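Editor's note on the recurring kubelet dns.go:153 "Nameserver limits exceeded" warnings around this point: the applied line "67.207.67.2 67.207.67.3 67.207.67.2" suggests the node's resolv.conf carried more than three entries (including a duplicate), and kubelet truncated the list to the three-resolver libc limit without deduplicating. A minimal sketch of that truncation, assuming a hypothetical fourth resolver; this is illustrative, not kubelet code.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // applyNameserverLimit keeps only the first max entries; duplicates
    // are not collapsed, matching the repeated 67.207.67.2 in the log.
    func applyNameserverLimit(ns []string, max int) []string {
    	if len(ns) > max {
    		return ns[:max]
    	}
    	return ns
    }

    func main() {
    	// "1.1.1.1" is an assumed extra resolver, added for illustration.
    	resolvers := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "1.1.1.1"}
    	applied := applyNameserverLimit(resolvers, 3)
    	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
    }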
Jan 17 12:19:50.186218 systemd[1]: sshd@12-209.38.138.250:22-139.178.68.195:39960.service: Deactivated successfully. Jan 17 12:19:50.189078 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:19:50.191150 systemd-logind[1443]: Removed session 13. Jan 17 12:19:51.272608 kubelet[2491]: E0117 12:19:51.272495 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:52.275440 kubelet[2491]: E0117 12:19:52.273522 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:53.287070 kubelet[2491]: E0117 12:19:53.286211 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:55.207358 systemd[1]: Started sshd@13-209.38.138.250:22-139.178.68.195:58664.service - OpenSSH per-connection server daemon (139.178.68.195:58664). Jan 17 12:19:55.275383 kubelet[2491]: E0117 12:19:55.275308 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:55.366308 sshd[6301]: Accepted publickey for core from 139.178.68.195 port 58664 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:55.374799 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:55.388245 systemd-logind[1443]: New session 14 of user core. Jan 17 12:19:55.395061 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:19:56.160977 sshd[6301]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:56.170209 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:19:56.172910 systemd[1]: sshd@13-209.38.138.250:22-139.178.68.195:58664.service: Deactivated successfully. Jan 17 12:19:56.179337 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:19:56.181980 systemd-logind[1443]: Removed session 14. Jan 17 12:19:58.005576 systemd[1]: run-containerd-runc-k8s.io-1add83100d22b3f2549302dd724b8679c98bc2ddd7e018d9968e02a4af3bb298-runc.K9Etyd.mount: Deactivated successfully. Jan 17 12:19:58.108899 kubelet[2491]: E0117 12:19:58.108818 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:58.140557 kubelet[2491]: I0117 12:19:58.140453 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f8d969f76-v6xkz" podStartSLOduration=21.140425543 podStartE2EDuration="21.140425543s" podCreationTimestamp="2025-01-17 12:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:43.375524032 +0000 UTC m=+65.286026627" watchObservedRunningTime="2025-01-17 12:19:58.140425543 +0000 UTC m=+80.050928143" Jan 17 12:20:01.212363 systemd[1]: Started sshd@14-209.38.138.250:22-139.178.68.195:58666.service - OpenSSH per-connection server daemon (139.178.68.195:58666). 
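Editor's note: the pod_startup_latency_tracker entry above for calico-kube-controllers-6f8d969f76-v6xkz is internally consistent. With firstStartedPulling and lastFinishedPulling at the zero time (no image pull observed), podStartE2EDuration is simply observedRunningTime minus podCreationTimestamp. A few lines of Go confirm the 21.140425543s figure; the parse layout is an assumption about the printed timestamp format, not kubelet code.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-01-17 12:19:37 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-01-17 12:19:58.140425543 +0000 UTC")
    	// Prints 21.140425543s, matching podStartSLOduration in the log.
    	fmt.Println(running.Sub(created))
    }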
Jan 17 12:20:01.326643 sshd[6346]: Accepted publickey for core from 139.178.68.195 port 58666 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:20:01.330312 sshd[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:01.349921 systemd-logind[1443]: New session 15 of user core. Jan 17 12:20:01.356322 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:20:02.036835 sshd[6346]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:02.050462 systemd[1]: sshd@14-209.38.138.250:22-139.178.68.195:58666.service: Deactivated successfully. Jan 17 12:20:02.061066 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:20:02.071968 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:20:02.075130 systemd-logind[1443]: Removed session 15. Jan 17 12:20:07.075152 systemd[1]: Started sshd@15-209.38.138.250:22-139.178.68.195:42762.service - OpenSSH per-connection server daemon (139.178.68.195:42762). Jan 17 12:20:07.166073 sshd[6366]: Accepted publickey for core from 139.178.68.195 port 42762 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:20:07.171061 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:07.204582 systemd-logind[1443]: New session 16 of user core. Jan 17 12:20:07.217129 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:20:07.620766 sshd[6366]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:07.635990 systemd[1]: sshd@15-209.38.138.250:22-139.178.68.195:42762.service: Deactivated successfully. Jan 17 12:20:07.643508 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:20:07.647284 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:20:07.650291 systemd-logind[1443]: Removed session 16. Jan 17 12:20:12.602707 systemd[1]: Started sshd@16-209.38.138.250:22-139.178.68.195:42764.service - OpenSSH per-connection server daemon (139.178.68.195:42764). Jan 17 12:20:12.714570 sshd[6407]: Accepted publickey for core from 139.178.68.195 port 42764 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:20:12.721381 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:12.733443 systemd-logind[1443]: New session 17 of user core. Jan 17 12:20:12.746260 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:20:13.182372 sshd[6407]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:13.197995 systemd[1]: sshd@16-209.38.138.250:22-139.178.68.195:42764.service: Deactivated successfully. Jan 17 12:20:13.207061 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:20:13.210035 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:20:13.225537 systemd[1]: Started sshd@17-209.38.138.250:22-139.178.68.195:42778.service - OpenSSH per-connection server daemon (139.178.68.195:42778). Jan 17 12:20:13.228474 systemd-logind[1443]: Removed session 17. Jan 17 12:20:13.371955 sshd[6420]: Accepted publickey for core from 139.178.68.195 port 42778 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:20:13.379620 sshd[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:13.396334 systemd-logind[1443]: New session 18 of user core. Jan 17 12:20:13.401445 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 17 12:20:13.992854 sshd[6420]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:14.026642 systemd[1]: Started sshd@18-209.38.138.250:22-139.178.68.195:42786.service - OpenSSH per-connection server daemon (139.178.68.195:42786).
Jan 17 12:20:14.027325 systemd[1]: sshd@17-209.38.138.250:22-139.178.68.195:42778.service: Deactivated successfully.
Jan 17 12:20:14.041191 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:20:14.049926 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:20:14.058853 systemd-logind[1443]: Removed session 18.
Jan 17 12:20:14.117974 sshd[6429]: Accepted publickey for core from 139.178.68.195 port 42786 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:14.122001 sshd[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:14.137148 systemd-logind[1443]: New session 19 of user core.
Jan 17 12:20:14.149331 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:20:18.115223 sshd[6429]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:18.133384 systemd[1]: sshd@18-209.38.138.250:22-139.178.68.195:42786.service: Deactivated successfully.
Jan 17 12:20:18.140167 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:20:18.141103 systemd[1]: session-19.scope: Consumed 1.047s CPU time.
Jan 17 12:20:18.145930 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:20:18.160865 systemd[1]: Started sshd@19-209.38.138.250:22-139.178.68.195:54788.service - OpenSSH per-connection server daemon (139.178.68.195:54788).
Jan 17 12:20:18.165449 systemd-logind[1443]: Removed session 19.
Jan 17 12:20:18.439301 sshd[6452]: Accepted publickey for core from 139.178.68.195 port 54788 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:18.445708 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:18.460919 systemd-logind[1443]: New session 20 of user core.
Jan 17 12:20:18.465551 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:20:19.881059 sshd[6452]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:19.901678 systemd[1]: sshd@19-209.38.138.250:22-139.178.68.195:54788.service: Deactivated successfully.
Jan 17 12:20:19.907274 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:20:19.912246 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:20:19.922124 systemd[1]: Started sshd@20-209.38.138.250:22-139.178.68.195:54804.service - OpenSSH per-connection server daemon (139.178.68.195:54804).
Jan 17 12:20:19.941367 systemd-logind[1443]: Removed session 20.
Jan 17 12:20:20.019387 sshd[6463]: Accepted publickey for core from 139.178.68.195 port 54804 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:20.023228 sshd[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:20.035965 systemd-logind[1443]: New session 21 of user core.
Jan 17 12:20:20.049244 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:20:20.282388 sshd[6463]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:20.290564 systemd[1]: sshd@20-209.38.138.250:22-139.178.68.195:54804.service: Deactivated successfully.
Jan 17 12:20:20.296470 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:20:20.300730 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:20:20.309137 systemd-logind[1443]: Removed session 21.
Jan 17 12:20:25.301238 systemd[1]: Started sshd@21-209.38.138.250:22-139.178.68.195:56180.service - OpenSSH per-connection server daemon (139.178.68.195:56180).
Jan 17 12:20:25.352426 sshd[6480]: Accepted publickey for core from 139.178.68.195 port 56180 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:25.353340 sshd[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:25.360620 systemd-logind[1443]: New session 22 of user core.
Jan 17 12:20:25.369070 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:20:25.522517 sshd[6480]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:25.529295 systemd[1]: sshd@21-209.38.138.250:22-139.178.68.195:56180.service: Deactivated successfully.
Jan 17 12:20:25.534799 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:20:25.535974 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:20:25.537455 systemd-logind[1443]: Removed session 22.
Jan 17 12:20:28.043384 systemd[1]: run-containerd-runc-k8s.io-1add83100d22b3f2549302dd724b8679c98bc2ddd7e018d9968e02a4af3bb298-runc.jplIrT.mount: Deactivated successfully.
Jan 17 12:20:30.548485 systemd[1]: Started sshd@22-209.38.138.250:22-139.178.68.195:56194.service - OpenSSH per-connection server daemon (139.178.68.195:56194).
Jan 17 12:20:30.664356 sshd[6515]: Accepted publickey for core from 139.178.68.195 port 56194 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:30.666081 sshd[6515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:30.674753 systemd-logind[1443]: New session 23 of user core.
Jan 17 12:20:30.681540 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:20:31.012421 sshd[6515]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:31.016620 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
Jan 17 12:20:31.017053 systemd[1]: sshd@22-209.38.138.250:22-139.178.68.195:56194.service: Deactivated successfully.
Jan 17 12:20:31.022036 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 12:20:31.025556 systemd-logind[1443]: Removed session 23.
Jan 17 12:20:36.039116 systemd[1]: Started sshd@23-209.38.138.250:22-139.178.68.195:50752.service - OpenSSH per-connection server daemon (139.178.68.195:50752).
Jan 17 12:20:36.091899 sshd[6536]: Accepted publickey for core from 139.178.68.195 port 50752 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:36.094588 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:36.103852 systemd-logind[1443]: New session 24 of user core.
Jan 17 12:20:36.109509 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 12:20:36.347620 sshd[6536]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:36.355096 systemd[1]: sshd@23-209.38.138.250:22-139.178.68.195:50752.service: Deactivated successfully.
Jan 17 12:20:36.360262 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:20:36.365086 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:20:36.366720 systemd-logind[1443]: Removed session 24.
Jan 17 12:20:37.009924 systemd[1]: Started sshd@24-209.38.138.250:22-170.64.173.50:41264.service - OpenSSH per-connection server daemon (170.64.173.50:41264).
Jan 17 12:20:37.215296 sshd[6549]: Connection closed by 170.64.173.50 port 41264
Jan 17 12:20:37.218580 systemd[1]: sshd@24-209.38.138.250:22-170.64.173.50:41264.service: Deactivated successfully.
Jan 17 12:20:41.408766 systemd[1]: Started sshd@25-209.38.138.250:22-139.178.68.195:50766.service - OpenSSH per-connection server daemon (139.178.68.195:50766).
Jan 17 12:20:41.526608 sshd[6555]: Accepted publickey for core from 139.178.68.195 port 50766 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:20:41.531459 sshd[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:20:41.616897 systemd-logind[1443]: New session 25 of user core.
Jan 17 12:20:41.623113 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 12:20:41.973146 sshd[6555]: pam_unix(sshd:session): session closed for user core
Jan 17 12:20:41.984258 systemd[1]: sshd@25-209.38.138.250:22-139.178.68.195:50766.service: Deactivated successfully.
Jan 17 12:20:41.988897 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 12:20:42.002362 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit.
Jan 17 12:20:42.007244 systemd-logind[1443]: Removed session 25.
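Closing note: alongside the steady stream of short-lived publickey sessions from 139.178.68.195, the journal records one connection from 170.64.173.50 that closed before authentication (no "Accepted publickey" line), a pattern typical of an unauthenticated probe. The helper below is a hedged sketch for auditing this churn offline; it assumes the journal has been exported to a plain-text file (the name journal.txt is hypothetical) with the same timestamp layout as the lines above, and it assumes the year 2025, which the timestamps themselves omit.

    import re
    from datetime import datetime

    # Match the systemd-logind lines seen above, e.g.
    #   "Jan 17 12:20:41.616897 systemd-logind[1443]: New session 25 of user core."
    OPEN = re.compile(r"^(\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) ")
    CLOSE = re.compile(r"^(\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

    def parse_ts(ts: str) -> datetime:
        # Timestamps carry no year; 2025 is assumed from context.
        return datetime.strptime(f"2025 {ts}", "%Y %b %d %H:%M:%S.%f")

    def session_durations(lines):
        """Pair each 'New session N' with its 'Removed session N' and yield durations."""
        opened = {}
        for line in lines:
            if m := OPEN.match(line):
                opened[m.group(2)] = parse_ts(m.group(1))
            elif (m := CLOSE.match(line)) and m.group(2) in opened:
                yield m.group(2), parse_ts(m.group(1)) - opened.pop(m.group(2))

    # Usage (hypothetical file name):
    #   for sid, dur in session_durations(open("journal.txt")):
    #       print(f"session {sid}: {dur}")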