Jan 17 12:21:25.074278 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:21:25.074317 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:25.074336 kernel: BIOS-provided physical RAM map: Jan 17 12:21:25.074349 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 12:21:25.074360 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 12:21:25.074371 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 12:21:25.074383 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 17 12:21:25.074395 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 17 12:21:25.074406 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 12:21:25.074420 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 12:21:25.074431 kernel: NX (Execute Disable) protection: active Jan 17 12:21:25.074443 kernel: APIC: Static calls initialized Jan 17 12:21:25.074488 kernel: SMBIOS 2.8 present. Jan 17 12:21:25.074500 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 17 12:21:25.074514 kernel: Hypervisor detected: KVM Jan 17 12:21:25.074532 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:21:25.074550 kernel: kvm-clock: using sched offset of 4093290879 cycles Jan 17 12:21:25.074563 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:21:25.074575 kernel: tsc: Detected 1995.312 MHz processor Jan 17 12:21:25.074588 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:21:25.074600 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:21:25.074613 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 17 12:21:25.074626 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:21:25.074639 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:21:25.074655 kernel: ACPI: Early table checksum verification disabled Jan 17 12:21:25.074667 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 17 12:21:25.074679 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:25.074691 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:25.074702 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:25.074716 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 17 12:21:25.074728 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:25.074739 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:25.074750 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:25.074764 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:21:25.074775 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 17 12:21:25.074784 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 17 12:21:25.074793 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 17 12:21:25.074803 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 17 12:21:25.074813 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 17 12:21:25.074824 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 17 12:21:25.074844 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 17 12:21:25.074857 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:21:25.074869 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:21:25.074881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 12:21:25.074891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 17 12:21:25.074909 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 17 12:21:25.074921 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 17 12:21:25.074938 kernel: Zone ranges: Jan 17 12:21:25.074949 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:21:25.074961 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 17 12:21:25.074973 kernel: Normal empty Jan 17 12:21:25.074984 kernel: Movable zone start for each node Jan 17 12:21:25.074996 kernel: Early memory node ranges Jan 17 12:21:25.075008 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:21:25.075018 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 17 12:21:25.075029 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 17 12:21:25.075044 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:21:25.075056 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:21:25.075072 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 17 12:21:25.075084 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:21:25.075096 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:21:25.075107 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:21:25.075119 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:21:25.075130 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:21:25.075141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:21:25.075158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:21:25.075169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:21:25.075179 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:21:25.075190 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:21:25.075201 kernel: TSC deadline timer available Jan 17 12:21:25.075213 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:21:25.075224 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:21:25.075236 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 17 12:21:25.075252 kernel: Booting paravirtualized kernel on KVM Jan 17 12:21:25.075264 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:21:25.075282 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:21:25.075294 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 17 12:21:25.075307 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:21:25.075319 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:21:25.075332 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 17 12:21:25.075346 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:25.075355 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:21:25.075367 kernel: random: crng init done Jan 17 12:21:25.075374 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:21:25.075382 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:21:25.075389 kernel: Fallback order for Node 0: 0 Jan 17 12:21:25.075397 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 17 12:21:25.075405 kernel: Policy zone: DMA32 Jan 17 12:21:25.075412 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:21:25.075420 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125152K reserved, 0K cma-reserved) Jan 17 12:21:25.075427 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:21:25.075438 kernel: Kernel/User page tables isolation: enabled Jan 17 12:21:25.075446 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:21:25.075468 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:21:25.075476 kernel: Dynamic Preempt: voluntary Jan 17 12:21:25.075484 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:21:25.075492 kernel: rcu: RCU event tracing is enabled. Jan 17 12:21:25.075500 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:21:25.075507 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:21:25.075515 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:21:25.075523 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:21:25.075535 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:21:25.075542 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:21:25.075549 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:21:25.075557 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 17 12:21:25.075569 kernel: Console: colour VGA+ 80x25 Jan 17 12:21:25.075577 kernel: printk: console [tty0] enabled Jan 17 12:21:25.075584 kernel: printk: console [ttyS0] enabled Jan 17 12:21:25.075592 kernel: ACPI: Core revision 20230628 Jan 17 12:21:25.075599 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 12:21:25.075610 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:21:25.075617 kernel: x2apic enabled Jan 17 12:21:25.075625 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:21:25.075632 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:21:25.075640 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Jan 17 12:21:25.075647 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312) Jan 17 12:21:25.075655 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 17 12:21:25.075663 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 17 12:21:25.075682 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:21:25.075708 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:21:25.075721 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:21:25.075736 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:21:25.075750 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 17 12:21:25.075763 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:21:25.075776 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:21:25.075789 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 12:21:25.075802 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:21:25.075824 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:21:25.075836 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:21:25.075848 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:21:25.075860 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:21:25.075873 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 12:21:25.075884 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:21:25.075895 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:21:25.075908 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:21:25.075924 kernel: landlock: Up and running. Jan 17 12:21:25.075937 kernel: SELinux: Initializing. Jan 17 12:21:25.075950 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:21:25.075962 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:21:25.075976 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 17 12:21:25.075984 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:21:25.075992 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:21:25.076000 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:21:25.076009 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 17 12:21:25.076021 kernel: signal: max sigframe size: 1776 Jan 17 12:21:25.076029 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:21:25.076037 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:21:25.076049 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:21:25.076061 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:21:25.076075 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:21:25.076088 kernel: .... node #0, CPUs: #1 Jan 17 12:21:25.076101 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:21:25.076122 kernel: smpboot: Max logical packages: 1 Jan 17 12:21:25.076141 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS) Jan 17 12:21:25.076154 kernel: devtmpfs: initialized Jan 17 12:21:25.076167 kernel: x86/mm: Memory block size: 128MB Jan 17 12:21:25.076180 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:21:25.076194 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:21:25.076203 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:21:25.076211 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:21:25.076219 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:21:25.076227 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:21:25.076239 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:21:25.076247 kernel: audit: type=2000 audit(1737116483.810:1): state=initialized audit_enabled=0 res=1 Jan 17 12:21:25.076255 kernel: cpuidle: using governor menu Jan 17 12:21:25.076264 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:21:25.076272 kernel: dca service started, version 1.12.1 Jan 17 12:21:25.076280 kernel: PCI: Using configuration type 1 for base access Jan 17 12:21:25.076288 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:21:25.076296 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:21:25.076305 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:21:25.076315 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:21:25.076323 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:21:25.076331 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:21:25.076344 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:21:25.076357 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:21:25.076369 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:21:25.076383 kernel: ACPI: Interpreter enabled Jan 17 12:21:25.076398 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:21:25.076410 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:21:25.076428 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:21:25.076437 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:21:25.076445 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 17 12:21:25.076482 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:21:25.076726 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:21:25.076866 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:21:25.077015 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:21:25.077035 kernel: acpiphp: Slot [3] registered Jan 17 12:21:25.077045 kernel: acpiphp: Slot [4] registered Jan 17 12:21:25.077059 kernel: acpiphp: Slot [5] registered Jan 17 12:21:25.077073 kernel: acpiphp: Slot [6] registered Jan 17 12:21:25.077088 kernel: acpiphp: Slot [7] registered Jan 17 12:21:25.077103 kernel: acpiphp: Slot [8] registered Jan 17 12:21:25.077112 kernel: acpiphp: Slot [9] registered Jan 17 12:21:25.077121 kernel: acpiphp: Slot [10] registered Jan 17 12:21:25.077132 kernel: acpiphp: Slot [11] registered Jan 17 12:21:25.077144 kernel: acpiphp: Slot [12] registered Jan 17 12:21:25.077152 kernel: acpiphp: Slot [13] registered Jan 17 12:21:25.077160 kernel: acpiphp: Slot [14] registered Jan 17 12:21:25.077173 kernel: acpiphp: Slot [15] registered Jan 17 12:21:25.077187 kernel: acpiphp: Slot [16] registered Jan 17 12:21:25.077196 kernel: acpiphp: Slot [17] registered Jan 17 12:21:25.077204 kernel: acpiphp: Slot [18] registered Jan 17 12:21:25.077212 kernel: acpiphp: Slot [19] registered Jan 17 12:21:25.077220 kernel: acpiphp: Slot [20] registered Jan 17 12:21:25.077228 kernel: acpiphp: Slot [21] registered Jan 17 12:21:25.077242 kernel: acpiphp: Slot [22] registered Jan 17 12:21:25.077255 kernel: acpiphp: Slot [23] registered Jan 17 12:21:25.077268 kernel: acpiphp: Slot [24] registered Jan 17 12:21:25.077282 kernel: acpiphp: Slot [25] registered Jan 17 12:21:25.077295 kernel: acpiphp: Slot [26] registered Jan 17 12:21:25.077306 kernel: acpiphp: Slot [27] registered Jan 17 12:21:25.077315 kernel: acpiphp: Slot [28] registered Jan 17 12:21:25.077323 kernel: acpiphp: Slot [29] registered Jan 17 12:21:25.077331 kernel: acpiphp: Slot [30] registered Jan 17 12:21:25.077348 kernel: acpiphp: Slot [31] registered Jan 17 12:21:25.077361 kernel: PCI host bridge to bus 0000:00 Jan 17 12:21:25.077612 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:21:25.077744 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 17 12:21:25.077875 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:21:25.077985 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 17 12:21:25.078105 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 17 12:21:25.078216 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:21:25.078380 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:21:25.078542 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 17 12:21:25.078714 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 17 12:21:25.078852 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 17 12:21:25.078973 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 17 12:21:25.079091 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 17 12:21:25.079241 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 17 12:21:25.079392 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 17 12:21:25.079661 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 17 12:21:25.079825 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 17 12:21:25.079959 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 12:21:25.080053 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 17 12:21:25.080152 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 17 12:21:25.080262 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 17 12:21:25.080358 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 17 12:21:25.080478 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 17 12:21:25.080583 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 17 12:21:25.080729 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 17 12:21:25.080836 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:21:25.080983 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:21:25.081099 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 17 12:21:25.081206 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 17 12:21:25.081304 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 17 12:21:25.081438 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:21:25.083818 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 17 12:21:25.084560 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 17 12:21:25.084728 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 17 12:21:25.084911 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 17 12:21:25.085070 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 17 12:21:25.085220 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 17 12:21:25.085366 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 17 12:21:25.086674 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 17 12:21:25.086841 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 17 12:21:25.087030 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 17 12:21:25.087168 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 17 12:21:25.087297 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 17 12:21:25.087413 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 17 12:21:25.088711 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 17 12:21:25.088872 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 17 12:21:25.089031 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 17 12:21:25.089194 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 17 12:21:25.089338 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 17 12:21:25.089357 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:21:25.089373 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:21:25.089387 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:21:25.089402 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:21:25.089417 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 12:21:25.089437 kernel: iommu: Default domain type: Translated Jan 17 12:21:25.089470 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:21:25.089485 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:21:25.089513 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:21:25.089527 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 12:21:25.089540 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 17 12:21:25.089696 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 17 12:21:25.089838 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 17 12:21:25.089979 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:21:25.090004 kernel: vgaarb: loaded Jan 17 12:21:25.090019 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 12:21:25.090034 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 12:21:25.090048 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:21:25.090062 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:21:25.090077 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:21:25.090090 kernel: pnp: PnP ACPI init Jan 17 12:21:25.090105 kernel: pnp: PnP ACPI: found 4 devices Jan 17 12:21:25.090120 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:21:25.090138 kernel: NET: Registered PF_INET protocol family Jan 17 12:21:25.090153 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:21:25.090168 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:21:25.090183 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:21:25.090198 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:21:25.090212 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:21:25.090227 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:21:25.090240 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:21:25.090259 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:21:25.090273 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:21:25.090287 kernel: NET: Registered PF_XDP protocol family Jan 17 12:21:25.090431 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:21:25.092675 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 
12:21:25.092821 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:21:25.092949 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 17 12:21:25.093070 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 17 12:21:25.093223 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 17 12:21:25.093383 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:21:25.093404 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 17 12:21:25.093574 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 43579 usecs Jan 17 12:21:25.093594 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:21:25.093609 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:21:25.093624 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Jan 17 12:21:25.093639 kernel: Initialise system trusted keyrings Jan 17 12:21:25.093653 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:21:25.093673 kernel: Key type asymmetric registered Jan 17 12:21:25.093687 kernel: Asymmetric key parser 'x509' registered Jan 17 12:21:25.093702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:21:25.093717 kernel: io scheduler mq-deadline registered Jan 17 12:21:25.093731 kernel: io scheduler kyber registered Jan 17 12:21:25.093745 kernel: io scheduler bfq registered Jan 17 12:21:25.093759 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:21:25.093773 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 17 12:21:25.093788 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 12:21:25.093806 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 12:21:25.093821 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:21:25.093835 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:21:25.093850 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:21:25.093865 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:21:25.093879 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:21:25.093894 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:21:25.094081 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 12:21:25.094224 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 12:21:25.094357 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:21:24 UTC (1737116484) Jan 17 12:21:25.096635 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 17 12:21:25.096662 kernel: intel_pstate: CPU model not supported Jan 17 12:21:25.096677 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:21:25.096691 kernel: Segment Routing with IPv6 Jan 17 12:21:25.096706 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:21:25.096717 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:21:25.096726 kernel: Key type dns_resolver registered Jan 17 12:21:25.096749 kernel: IPI shorthand broadcast: enabled Jan 17 12:21:25.096763 kernel: sched_clock: Marking stable (1355006779, 163159962)->(1571616702, -53449961) Jan 17 12:21:25.096776 kernel: registered taskstats version 1 Jan 17 12:21:25.096790 kernel: Loading compiled-in X.509 certificates Jan 17 12:21:25.096805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:21:25.096819 kernel: Key type .fscrypt registered 
Jan 17 12:21:25.096833 kernel: Key type fscrypt-provisioning registered Jan 17 12:21:25.096847 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 12:21:25.096865 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:21:25.096880 kernel: ima: No architecture policies found Jan 17 12:21:25.096894 kernel: clk: Disabling unused clocks Jan 17 12:21:25.096910 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:21:25.096925 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:21:25.096966 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:21:25.096984 kernel: Run /init as init process Jan 17 12:21:25.096997 kernel: with arguments: Jan 17 12:21:25.097011 kernel: /init Jan 17 12:21:25.097025 kernel: with environment: Jan 17 12:21:25.097043 kernel: HOME=/ Jan 17 12:21:25.097058 kernel: TERM=linux Jan 17 12:21:25.097072 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:21:25.097091 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:21:25.097110 systemd[1]: Detected virtualization kvm. Jan 17 12:21:25.097126 systemd[1]: Detected architecture x86-64. Jan 17 12:21:25.097141 systemd[1]: Running in initrd. Jan 17 12:21:25.097161 systemd[1]: No hostname configured, using default hostname. Jan 17 12:21:25.097175 systemd[1]: Hostname set to . Jan 17 12:21:25.097191 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:21:25.097207 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:21:25.097223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:25.097239 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:25.097256 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:21:25.097272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:21:25.097291 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:21:25.097307 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:21:25.097326 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:21:25.097341 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:21:25.097357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:25.097373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:25.097388 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:21:25.097408 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:21:25.097424 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:21:25.097445 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:21:25.097485 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:21:25.097501 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 17 12:21:25.097522 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:21:25.097538 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:21:25.097554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:25.097570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:25.097586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:25.097602 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:21:25.097618 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:21:25.097634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:21:25.097650 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:21:25.097669 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:21:25.097684 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:21:25.097701 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:21:25.097716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:25.097732 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:21:25.097747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:25.097763 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:21:25.097784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:21:25.097839 systemd-journald[183]: Collecting audit messages is disabled. Jan 17 12:21:25.097883 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:25.097899 systemd-journald[183]: Journal started Jan 17 12:21:25.097934 systemd-journald[183]: Runtime Journal (/run/log/journal/35fdd3ff048c4099a18fcd3e349c6e56) is 4.9M, max 39.3M, 34.4M free. Jan 17 12:21:25.084188 systemd-modules-load[184]: Inserted module 'overlay' Jan 17 12:21:25.149285 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:21:25.149342 kernel: Bridge firewalling registered Jan 17 12:21:25.149361 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:21:25.124800 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 17 12:21:25.156016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:25.156996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:25.164798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:25.168754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:25.172778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:21:25.176640 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:21:25.202762 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:25.206532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:25.210276 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 17 12:21:25.211604 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:25.218791 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:21:25.223737 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:21:25.244130 dracut-cmdline[218]: dracut-dracut-053 Jan 17 12:21:25.249491 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:25.280259 systemd-resolved[219]: Positive Trust Anchors: Jan 17 12:21:25.280282 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:21:25.280333 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:21:25.289651 systemd-resolved[219]: Defaulting to hostname 'linux'. Jan 17 12:21:25.292732 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:21:25.293514 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:25.358528 kernel: SCSI subsystem initialized Jan 17 12:21:25.370495 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:21:25.385749 kernel: iscsi: registered transport (tcp) Jan 17 12:21:25.415647 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:21:25.415899 kernel: QLogic iSCSI HBA Driver Jan 17 12:21:25.478995 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:21:25.486781 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:21:25.529542 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:21:25.529623 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:21:25.531594 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:21:25.579548 kernel: raid6: avx2x4 gen() 27531 MB/s Jan 17 12:21:25.596545 kernel: raid6: avx2x2 gen() 27375 MB/s Jan 17 12:21:25.613949 kernel: raid6: avx2x1 gen() 19353 MB/s Jan 17 12:21:25.614053 kernel: raid6: using algorithm avx2x4 gen() 27531 MB/s Jan 17 12:21:25.633541 kernel: raid6: .... xor() 8629 MB/s, rmw enabled Jan 17 12:21:25.633627 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:21:25.665525 kernel: xor: automatically using best checksumming function avx Jan 17 12:21:25.908312 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:21:25.934777 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:25.942764 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 17 12:21:25.975993 systemd-udevd[403]: Using default interface naming scheme 'v255'. Jan 17 12:21:25.981821 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:25.993125 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:21:26.026835 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jan 17 12:21:26.074214 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:26.079928 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:21:26.169195 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:26.178681 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:21:26.215967 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:26.218673 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:26.220646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:26.222337 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:21:26.229023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:21:26.262387 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:26.293521 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 17 12:21:26.404903 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 17 12:21:26.405106 kernel: scsi host0: Virtio SCSI HBA Jan 17 12:21:26.405291 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:21:26.405327 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:21:26.405345 kernel: GPT:9289727 != 125829119 Jan 17 12:21:26.405360 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:21:26.405376 kernel: GPT:9289727 != 125829119 Jan 17 12:21:26.405391 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:21:26.405408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:26.405427 kernel: ACPI: bus type USB registered Jan 17 12:21:26.405446 kernel: usbcore: registered new interface driver usbfs Jan 17 12:21:26.405491 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:21:26.398995 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:26.399203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:26.409310 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 17 12:21:26.441967 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Jan 17 12:21:26.442205 kernel: usbcore: registered new interface driver hub Jan 17 12:21:26.442229 kernel: AES CTR mode by8 optimization enabled Jan 17 12:21:26.442247 kernel: usbcore: registered new device driver usb Jan 17 12:21:26.442266 kernel: libata version 3.00 loaded. Jan 17 12:21:26.400262 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:26.401148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:26.401364 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:26.403618 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:26.409948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 17 12:21:26.505506 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 17 12:21:26.529644 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 17 12:21:26.529915 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 17 12:21:26.530102 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 17 12:21:26.530281 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Jan 17 12:21:26.530315 kernel: scsi host1: ata_piix Jan 17 12:21:26.531418 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (448) Jan 17 12:21:26.531439 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 17 12:21:26.531658 kernel: scsi host2: ata_piix Jan 17 12:21:26.531954 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 17 12:21:26.531974 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 17 12:21:26.531991 kernel: hub 1-0:1.0: USB hub found Jan 17 12:21:26.532207 kernel: hub 1-0:1.0: 2 ports detected Jan 17 12:21:26.535320 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:21:26.602248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:26.614891 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:21:26.628180 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:21:26.629130 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:21:26.644622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:21:26.654816 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:21:26.660743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:26.665591 disk-uuid[542]: Primary Header is updated. Jan 17 12:21:26.665591 disk-uuid[542]: Secondary Entries is updated. Jan 17 12:21:26.665591 disk-uuid[542]: Secondary Header is updated. Jan 17 12:21:26.674548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:26.707140 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:27.687536 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:27.689248 disk-uuid[543]: The operation has completed successfully. Jan 17 12:21:27.754242 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:21:27.755601 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:21:27.762754 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:21:27.770221 sh[565]: Success Jan 17 12:21:27.794676 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:21:27.887966 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:21:27.904685 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:21:27.909768 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 12:21:27.939620 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:21:27.939891 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:27.939943 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:21:27.941937 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:21:27.943928 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:21:27.955611 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:21:27.957857 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:21:27.967190 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:21:27.972195 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:21:28.001584 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:28.005266 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:28.005414 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:21:28.015611 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:21:28.032403 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:21:28.034122 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:28.042987 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:21:28.051886 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:21:28.262262 ignition[659]: Ignition 2.19.0 Jan 17 12:21:28.262973 ignition[659]: Stage: fetch-offline Jan 17 12:21:28.264843 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:28.263068 ignition[659]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:28.266357 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:21:28.263085 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:28.263258 ignition[659]: parsed url from cmdline: "" Jan 17 12:21:28.263265 ignition[659]: no config URL provided Jan 17 12:21:28.263274 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:21:28.263288 ignition[659]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:21:28.263298 ignition[659]: failed to fetch config: resource requires networking Jan 17 12:21:28.263639 ignition[659]: Ignition finished successfully Jan 17 12:21:28.278885 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:21:28.328981 systemd-networkd[755]: lo: Link UP Jan 17 12:21:28.328991 systemd-networkd[755]: lo: Gained carrier Jan 17 12:21:28.332334 systemd-networkd[755]: Enumeration completed Jan 17 12:21:28.333099 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 12:21:28.333105 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 17 12:21:28.334605 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:21:28.334611 systemd-networkd[755]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:21:28.335420 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:21:28.336098 systemd-networkd[755]: eth0: Link UP Jan 17 12:21:28.336105 systemd-networkd[755]: eth0: Gained carrier Jan 17 12:21:28.336292 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 12:21:28.336515 systemd[1]: Reached target network.target - Network. Jan 17 12:21:28.343064 systemd-networkd[755]: eth1: Link UP Jan 17 12:21:28.343069 systemd-networkd[755]: eth1: Gained carrier Jan 17 12:21:28.343084 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:28.343921 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:21:28.356556 systemd-networkd[755]: eth1: DHCPv4 address 10.124.0.5/20 acquired from 169.254.169.253 Jan 17 12:21:28.361635 systemd-networkd[755]: eth0: DHCPv4 address 137.184.236.252/20, gateway 137.184.224.1 acquired from 169.254.169.253 Jan 17 12:21:28.371534 ignition[757]: Ignition 2.19.0 Jan 17 12:21:28.372648 ignition[757]: Stage: fetch Jan 17 12:21:28.372966 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:28.372981 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:28.373123 ignition[757]: parsed url from cmdline: "" Jan 17 12:21:28.373130 ignition[757]: no config URL provided Jan 17 12:21:28.373137 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:21:28.373149 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:21:28.373182 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 17 12:21:28.392212 ignition[757]: GET result: OK Jan 17 12:21:28.392365 ignition[757]: parsing config with SHA512: 71e24bffe6ec5ce51594a436ed92201e95d61fdcceb561363db72f49185dc192a8c32e54ef055611720082b494f0f83dcec0042ca621abc9d40921de74360a0a Jan 17 12:21:28.400729 unknown[757]: fetched base config from "system" Jan 17 12:21:28.400768 unknown[757]: fetched base config from "system" Jan 17 12:21:28.400780 unknown[757]: fetched user config from "digitalocean" Jan 17 12:21:28.403073 ignition[757]: fetch: fetch complete Jan 17 12:21:28.403085 ignition[757]: fetch: fetch passed Jan 17 12:21:28.403161 ignition[757]: Ignition finished successfully Jan 17 12:21:28.408140 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:21:28.422768 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:21:28.456228 ignition[764]: Ignition 2.19.0 Jan 17 12:21:28.456250 ignition[764]: Stage: kargs Jan 17 12:21:28.458906 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:28.458934 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:28.462542 ignition[764]: kargs: kargs passed Jan 17 12:21:28.462692 ignition[764]: Ignition finished successfully Jan 17 12:21:28.465154 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:21:28.475024 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 17 12:21:28.497497 ignition[770]: Ignition 2.19.0 Jan 17 12:21:28.497511 ignition[770]: Stage: disks Jan 17 12:21:28.500192 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:28.500221 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:28.501822 ignition[770]: disks: disks passed Jan 17 12:21:28.503276 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:21:28.501908 ignition[770]: Ignition finished successfully Jan 17 12:21:28.513434 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:28.515256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:21:28.516705 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:21:28.517994 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:21:28.519422 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:21:28.531419 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:21:28.560747 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:21:28.566893 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:21:28.574621 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:21:28.755536 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:21:28.756390 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:21:28.757602 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:21:28.767781 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:21:28.770653 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:21:28.778810 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 17 12:21:28.787893 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:21:28.801699 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787) Jan 17 12:21:28.806143 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:28.809248 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:28.809338 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:21:28.811480 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:21:28.811537 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:28.831730 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:21:28.815928 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:21:28.827262 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:21:28.848872 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:21:28.905502 coreos-metadata[789]: Jan 17 12:21:28.903 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:21:28.915492 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:21:28.922035 coreos-metadata[789]: Jan 17 12:21:28.920 INFO Fetch successful Jan 17 12:21:28.922982 coreos-metadata[790]: Jan 17 12:21:28.922 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:21:28.925572 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:21:28.933561 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 17 12:21:28.935184 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 17 12:21:28.938377 coreos-metadata[790]: Jan 17 12:21:28.938 INFO Fetch successful Jan 17 12:21:28.942511 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:21:28.954074 coreos-metadata[790]: Jan 17 12:21:28.952 INFO wrote hostname ci-4081.3.0-1-b9b10bea58 to /sysroot/etc/hostname Jan 17 12:21:28.954973 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:21:28.960485 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:21:29.133980 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:29.140674 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:21:29.143790 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:21:29.161569 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:21:29.164596 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:29.198839 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:21:29.218640 ignition[907]: INFO : Ignition 2.19.0 Jan 17 12:21:29.220683 ignition[907]: INFO : Stage: mount Jan 17 12:21:29.220683 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:29.223613 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:29.223613 ignition[907]: INFO : mount: mount passed Jan 17 12:21:29.223613 ignition[907]: INFO : Ignition finished successfully Jan 17 12:21:29.224991 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:21:29.242639 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:21:29.257689 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:21:29.282427 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (920) Jan 17 12:21:29.286627 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:29.286728 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:29.287586 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:21:29.293507 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:21:29.297742 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:21:29.334512 ignition[937]: INFO : Ignition 2.19.0 Jan 17 12:21:29.334512 ignition[937]: INFO : Stage: files Jan 17 12:21:29.336114 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:29.336114 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:29.338302 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:21:29.338302 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:21:29.341092 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:21:29.345199 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:21:29.346281 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:21:29.347329 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:21:29.346669 unknown[937]: wrote ssh authorized keys file for user: core Jan 17 12:21:29.350608 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:21:29.352257 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:21:29.352257 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:21:29.352257 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:21:29.396584 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:21:29.670131 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:21:29.670131 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:29.673391 
ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:29.673391 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:21:29.777217 systemd-networkd[755]: eth0: Gained IPv6LL Jan 17 12:21:30.077031 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:21:30.359349 systemd-networkd[755]: eth1: Gained IPv6LL Jan 17 12:21:30.519647 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:30.519647 ignition[937]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:21:30.523762 ignition[937]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:21:30.523762 ignition[937]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:21:30.523762 ignition[937]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:21:30.523762 ignition[937]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 12:21:30.523762 ignition[937]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:30.523762 ignition[937]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:30.523762 ignition[937]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:21:30.523762 ignition[937]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:30.523762 ignition[937]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:30.523762 ignition[937]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:30.523762 ignition[937]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:30.523762 ignition[937]: INFO : files: files passed Jan 17 12:21:30.523762 ignition[937]: INFO : Ignition finished successfully Jan 17 12:21:30.523975 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:21:30.532927 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:21:30.541798 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:21:30.553719 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 17 12:21:30.553885 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:21:30.568505 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:30.568505 initrd-setup-root-after-ignition[966]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:30.571958 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:30.574890 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:30.577088 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:21:30.586184 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:21:30.624264 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:21:30.624435 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:21:30.626293 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:21:30.628485 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:21:30.630571 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:21:30.638696 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:21:30.661945 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:30.670772 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:21:30.701032 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:30.702104 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:30.710591 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:21:30.711498 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:21:30.711686 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:30.712900 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:21:30.713719 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:21:30.714410 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:21:30.715182 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:30.715998 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:30.717645 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:21:30.718345 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:30.719100 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:21:30.719933 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:21:30.720596 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:21:30.728290 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:21:30.728638 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:30.730443 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:30.732936 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:30.733847 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 17 12:21:30.740926 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:30.760869 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:21:30.761098 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:30.762248 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:21:30.762547 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:30.763501 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:21:30.767838 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:21:30.769890 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:21:30.770130 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:21:30.784241 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:21:30.790395 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:21:30.791194 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:21:30.792823 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:30.797915 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:21:30.798338 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:30.818654 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:21:30.818785 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:21:30.830023 ignition[990]: INFO : Ignition 2.19.0 Jan 17 12:21:30.830023 ignition[990]: INFO : Stage: umount Jan 17 12:21:30.830023 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:30.830023 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:21:30.836097 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:21:30.840989 ignition[990]: INFO : umount: umount passed Jan 17 12:21:30.840989 ignition[990]: INFO : Ignition finished successfully Jan 17 12:21:30.836296 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:21:30.838378 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:21:30.838536 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:21:30.840337 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:21:30.840397 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:21:30.842520 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:21:30.842605 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:21:30.846035 systemd[1]: Stopped target network.target - Network. Jan 17 12:21:30.846737 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:21:30.846819 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:21:30.848987 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:21:30.851522 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:21:30.851605 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:30.853786 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:21:30.856481 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 17 12:21:30.859801 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:21:30.859890 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:21:30.862597 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:21:30.862682 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:21:30.865972 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:21:30.866073 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:21:30.867637 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:21:30.867743 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:21:30.870955 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:21:30.876825 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:21:30.880620 systemd-networkd[755]: eth0: DHCPv6 lease lost Jan 17 12:21:30.884341 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:21:30.885487 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:21:30.885656 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:21:30.886793 systemd-networkd[755]: eth1: DHCPv6 lease lost Jan 17 12:21:30.889082 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:21:30.889309 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:21:30.890576 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:21:30.890736 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:21:30.893768 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:21:30.893858 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:30.895369 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:21:30.895473 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:30.903654 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:21:30.905092 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:21:30.905192 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:30.909332 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:21:30.909426 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:30.910637 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:21:30.910725 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:30.912586 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:21:30.912656 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:30.914044 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:30.937663 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:21:30.937910 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:21:30.944615 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:21:30.944887 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:30.947999 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:21:30.948083 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 17 12:21:30.949446 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:21:30.949526 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:30.951423 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:21:30.951657 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:30.954212 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:21:30.954303 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:21:30.955850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:30.955931 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:30.965817 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:21:30.967834 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:21:30.967944 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:30.973274 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:21:30.973378 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:30.975815 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:21:30.975905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:30.979813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:30.979907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:30.982630 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:21:30.982798 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:21:30.984776 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:21:30.992788 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:21:31.023830 systemd[1]: Switching root. Jan 17 12:21:31.060539 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 17 12:21:31.060649 systemd-journald[183]: Journal stopped Jan 17 12:21:33.018719 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:21:33.018847 kernel: SELinux: policy capability open_perms=1 Jan 17 12:21:33.018863 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:21:33.018874 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:21:33.018885 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:21:33.018896 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:21:33.018908 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:21:33.018925 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:21:33.018937 kernel: audit: type=1403 audit(1737116491.454:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:21:33.018958 systemd[1]: Successfully loaded SELinux policy in 50.952ms. Jan 17 12:21:33.018991 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.966ms. 
Jan 17 12:21:33.019015 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:21:33.019036 systemd[1]: Detected virtualization kvm. Jan 17 12:21:33.019049 systemd[1]: Detected architecture x86-64. Jan 17 12:21:33.019062 systemd[1]: Detected first boot. Jan 17 12:21:33.019077 systemd[1]: Hostname set to <ci-4081.3.0-1-b9b10bea58>. Jan 17 12:21:33.019089 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:21:33.019102 zram_generator::config[1050]: No configuration found. Jan 17 12:21:33.019120 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:21:33.019132 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:21:33.019143 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:21:33.019162 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:21:33.019178 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:21:33.019194 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:21:33.019207 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:21:33.019220 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:21:33.019232 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:21:33.019250 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:21:33.019266 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:21:33.019277 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:33.019290 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:33.019304 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:21:33.019320 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:21:33.019333 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:21:33.019345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:21:33.019356 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:21:33.019368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:33.019380 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:21:33.019391 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:33.019408 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:21:33.019423 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:21:33.019435 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:21:33.019447 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:21:33.020517 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:21:33.020535 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 17 12:21:33.020547 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:21:33.020558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:33.020571 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:33.020594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:33.020609 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:21:33.020620 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:21:33.020633 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:21:33.020646 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:21:33.020657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:33.020669 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:21:33.020683 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:21:33.020695 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:21:33.020710 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:21:33.020722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:33.020733 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:21:33.020745 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:21:33.020757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:33.020769 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:21:33.020780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:33.020793 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:21:33.020808 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:33.020819 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:21:33.020831 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:21:33.020844 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:21:33.020857 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:21:33.020876 kernel: loop: module loaded Jan 17 12:21:33.020893 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:21:33.020910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:21:33.020931 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:21:33.020947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:21:33.020964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:33.020983 kernel: ACPI: bus type drm_connector registered Jan 17 12:21:33.021000 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 17 12:21:33.021019 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:21:33.021035 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:21:33.021052 kernel: fuse: init (API version 7.39) Jan 17 12:21:33.021069 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:21:33.021093 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:21:33.021111 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:21:33.021131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:33.021149 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:21:33.021168 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:21:33.021237 systemd-journald[1141]: Collecting audit messages is disabled. Jan 17 12:21:33.021279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:33.021306 systemd-journald[1141]: Journal started Jan 17 12:21:33.021342 systemd-journald[1141]: Runtime Journal (/run/log/journal/35fdd3ff048c4099a18fcd3e349c6e56) is 4.9M, max 39.3M, 34.4M free. Jan 17 12:21:33.024114 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:33.026570 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:21:33.037738 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:21:33.038074 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:21:33.041735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:33.042037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:33.045513 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:21:33.045794 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:21:33.047121 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:33.049846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:33.054412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:33.058406 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:21:33.061649 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:21:33.084145 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:21:33.093815 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:21:33.102703 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:21:33.105967 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:21:33.116825 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:21:33.134845 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:21:33.135973 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:33.143830 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:21:33.146776 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 12:21:33.156799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:33.168742 systemd-journald[1141]: Time spent on flushing to /var/log/journal/35fdd3ff048c4099a18fcd3e349c6e56 is 101.539ms for 971 entries. Jan 17 12:21:33.168742 systemd-journald[1141]: System Journal (/var/log/journal/35fdd3ff048c4099a18fcd3e349c6e56) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:21:33.296407 systemd-journald[1141]: Received client request to flush runtime journal. Jan 17 12:21:33.172923 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:21:33.185956 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:21:33.187340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:33.188537 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:21:33.191088 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:21:33.200214 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:21:33.206911 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:21:33.220844 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:21:33.271430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:33.287754 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:21:33.300183 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:21:33.303281 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 17 12:21:33.303307 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 17 12:21:33.313824 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:33.324056 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:21:33.384603 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:21:33.393911 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:21:33.427011 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 17 12:21:33.427042 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 17 12:21:33.435068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:34.298064 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:21:34.314392 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:34.386815 systemd-udevd[1221]: Using default interface naming scheme 'v255'. Jan 17 12:21:34.425368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:34.436324 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:21:34.464852 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:21:34.545936 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:21:34.574393 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:21:34.664673 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 12:21:34.664942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:34.675515 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1233) Jan 17 12:21:34.676694 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:34.688744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:34.699736 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:34.703637 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:21:34.703807 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:21:34.703888 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:34.708890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:34.709195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:34.726780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:34.727028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:34.757264 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:34.758794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:34.790427 systemd-networkd[1223]: lo: Link UP Jan 17 12:21:34.790444 systemd-networkd[1223]: lo: Gained carrier Jan 17 12:21:34.793320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:34.793370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:34.793982 systemd-networkd[1223]: Enumeration completed Jan 17 12:21:34.794544 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:21:34.794552 systemd-networkd[1223]: eth0: Configuring with /run/systemd/network/10-56:cb:b9:a5:c6:e4.network. Jan 17 12:21:34.795542 systemd-networkd[1223]: eth1: Configuring with /run/systemd/network/10-52:af:62:2d:c7:3c.network. Jan 17 12:21:34.796317 systemd-networkd[1223]: eth0: Link UP Jan 17 12:21:34.796322 systemd-networkd[1223]: eth0: Gained carrier Jan 17 12:21:34.802759 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:21:34.804090 systemd-networkd[1223]: eth1: Link UP Jan 17 12:21:34.804096 systemd-networkd[1223]: eth1: Gained carrier Jan 17 12:21:34.858042 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:21:34.871527 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 12:21:34.870881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 17 12:21:34.881850 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:21:34.894511 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:21:34.977117 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:21:34.993628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:35.082933 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 12:21:35.088689 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 12:21:35.097492 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:21:35.099811 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 12:21:35.099914 kernel: [drm] features: -context_init Jan 17 12:21:35.104494 kernel: [drm] number of scanouts: 1 Jan 17 12:21:35.104626 kernel: [drm] number of cap sets: 0 Jan 17 12:21:35.110580 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 12:21:35.128845 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 12:21:35.129012 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:21:35.144583 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 12:21:35.164389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:35.164934 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:35.186387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:35.192485 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:21:35.194296 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:35.195446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:35.209171 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:35.218284 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:21:35.233888 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:21:35.258648 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:35.262385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:35.292908 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:21:35.294137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:35.301926 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:21:35.315426 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:35.348578 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:21:35.350239 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:21:35.359865 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 17 12:21:35.362174 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:21:35.362244 systemd[1]: Reached target machines.target - Containers. Jan 17 12:21:35.375944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 17 12:21:35.398346 kernel: ISO 9660 Extensions: RRIP_1991A Jan 17 12:21:35.398051 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 17 12:21:35.401317 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:21:35.405081 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:21:35.415803 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:21:35.419392 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:21:35.422327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:35.436284 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:21:35.448149 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:21:35.453813 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:21:35.458297 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:21:35.487908 kernel: loop0: detected capacity change from 0 to 211296 Jan 17 12:21:35.487537 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:21:35.489443 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:21:35.519521 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:21:35.542580 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 12:21:35.584530 kernel: loop2: detected capacity change from 0 to 8 Jan 17 12:21:35.605153 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:21:35.648839 kernel: loop4: detected capacity change from 0 to 211296 Jan 17 12:21:35.674154 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 12:21:35.694911 kernel: loop6: detected capacity change from 0 to 8 Jan 17 12:21:35.699911 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 12:21:35.714906 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 17 12:21:35.715771 (sd-merge)[1317]: Merged extensions into '/usr'. Jan 17 12:21:35.729118 systemd[1]: Reloading requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:21:35.729145 systemd[1]: Reloading... Jan 17 12:21:35.874584 zram_generator::config[1357]: No configuration found. Jan 17 12:21:36.054759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:36.162990 systemd[1]: Reloading finished in 432 ms. Jan 17 12:21:36.182975 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:21:36.195810 systemd[1]: Starting ensure-sysext.service... Jan 17 12:21:36.207897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:21:36.217501 ldconfig[1303]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:21:36.225390 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:21:36.230726 systemd[1]: Reloading requested from client PID 1393 ('systemctl') (unit ensure-sysext.service)... 
Jan 17 12:21:36.230753 systemd[1]: Reloading... Jan 17 12:21:36.264728 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:21:36.266342 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:21:36.267395 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:21:36.267813 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Jan 17 12:21:36.267889 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Jan 17 12:21:36.271944 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:21:36.272153 systemd-tmpfiles[1394]: Skipping /boot Jan 17 12:21:36.287054 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:21:36.287276 systemd-tmpfiles[1394]: Skipping /boot Jan 17 12:21:36.306584 systemd-networkd[1223]: eth1: Gained IPv6LL Jan 17 12:21:36.355524 zram_generator::config[1427]: No configuration found. Jan 17 12:21:36.526648 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:36.604101 systemd[1]: Reloading finished in 372 ms. Jan 17 12:21:36.624057 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:21:36.625929 systemd-networkd[1223]: eth0: Gained IPv6LL Jan 17 12:21:36.632530 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:36.656268 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:36.661717 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:21:36.677751 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:21:36.692927 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:21:36.708862 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:21:36.724285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:36.725543 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:36.729617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:36.745892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:36.763853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:36.765894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:36.766080 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:36.776338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:36.776594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:36.785859 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 17 12:21:36.805196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:36.805409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:36.822285 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:21:36.828388 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:36.831816 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:36.843297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:36.843661 augenrules[1510]: No rules Jan 17 12:21:36.845364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:36.855627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:36.872001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:36.874790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:36.881372 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:21:36.893184 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:21:36.893380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:36.895111 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:36.899864 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:21:36.900309 systemd-resolved[1488]: Positive Trust Anchors: Jan 17 12:21:36.900328 systemd-resolved[1488]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:21:36.900365 systemd-resolved[1488]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:21:36.903013 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:36.905819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:36.909363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:36.913450 systemd-resolved[1488]: Using system hostname 'ci-4081.3.0-1-b9b10bea58'. Jan 17 12:21:36.914776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:36.922094 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:21:36.926546 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:21:36.937251 systemd[1]: Reached target network.target - Network. Jan 17 12:21:36.940251 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 17 12:21:36.940874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:36.941647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:36.941888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:36.948108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:36.961076 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:21:36.967957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:36.976905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:36.980916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:36.982084 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:21:36.982577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:36.984876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:36.985280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:36.990154 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:21:36.990345 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:21:36.994649 systemd[1]: Finished ensure-sysext.service. Jan 17 12:21:36.996786 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:36.996985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:37.003143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:37.003447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:37.014229 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:37.014388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:37.019867 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:21:37.111627 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:21:37.113675 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:21:37.116649 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:21:37.117285 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:21:37.117868 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:21:37.118385 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:21:37.118432 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:21:37.120621 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 17 12:21:37.122123 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:21:37.124404 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:21:37.124930 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:21:37.127612 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:21:37.131734 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:21:37.135854 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:21:37.139001 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:21:37.864457 systemd-timesyncd[1548]: Contacted time server 5.78.62.36:123 (0.flatcar.pool.ntp.org). Jan 17 12:21:37.864520 systemd-resolved[1488]: Clock change detected. Flushing caches. Jan 17 12:21:37.864529 systemd-timesyncd[1548]: Initial clock synchronization to Fri 2025-01-17 12:21:37.864162 UTC. Jan 17 12:21:37.866159 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:21:37.867966 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:21:37.869935 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:21:37.870080 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:37.870118 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:37.876381 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:21:37.887378 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:21:37.898362 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:21:37.909228 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:21:37.923263 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:21:37.925390 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:21:37.931110 coreos-metadata[1553]: Jan 17 12:21:37.931 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:21:37.937260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:37.968470 coreos-metadata[1553]: Jan 17 12:21:37.949 INFO Fetch successful Jan 17 12:21:37.968651 jq[1558]: false Jan 17 12:21:37.949788 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:21:37.991300 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:21:38.000957 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:21:38.013572 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:21:38.023733 dbus-daemon[1555]: [system] SELinux support is enabled Jan 17 12:21:38.026117 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:21:38.045279 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:21:38.052310 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:21:38.075403 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 17 12:21:38.080130 extend-filesystems[1559]: Found loop4 Jan 17 12:21:38.080130 extend-filesystems[1559]: Found loop5 Jan 17 12:21:38.080130 extend-filesystems[1559]: Found loop6 Jan 17 12:21:38.080130 extend-filesystems[1559]: Found loop7 Jan 17 12:21:38.080130 extend-filesystems[1559]: Found vda Jan 17 12:21:38.080130 extend-filesystems[1559]: Found vda1 Jan 17 12:21:38.080130 extend-filesystems[1559]: Found vda2 Jan 17 12:21:38.080130 extend-filesystems[1559]: Found vda3 Jan 17 12:21:38.165858 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:21:38.181694 extend-filesystems[1559]: Found usr Jan 17 12:21:38.181694 extend-filesystems[1559]: Found vda4 Jan 17 12:21:38.181694 extend-filesystems[1559]: Found vda6 Jan 17 12:21:38.181694 extend-filesystems[1559]: Found vda7 Jan 17 12:21:38.181694 extend-filesystems[1559]: Found vda9 Jan 17 12:21:38.181694 extend-filesystems[1559]: Checking size of /dev/vda9 Jan 17 12:21:38.181881 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:21:38.212726 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:21:38.218597 extend-filesystems[1559]: Resized partition /dev/vda9 Jan 17 12:21:38.219290 jq[1587]: true Jan 17 12:21:38.215162 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:21:38.230282 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:21:38.235233 update_engine[1580]: I20250117 12:21:38.234587 1580 main.cc:92] Flatcar Update Engine starting Jan 17 12:21:38.246245 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 17 12:21:38.230595 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:21:38.246370 extend-filesystems[1598]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:21:38.247458 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:21:38.263160 update_engine[1580]: I20250117 12:21:38.251263 1580 update_check_scheduler.cc:74] Next update check in 3m40s Jan 17 12:21:38.264672 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:21:38.264959 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:21:38.300952 jq[1602]: true Jan 17 12:21:38.326517 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:21:38.367512 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:21:38.373932 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:21:38.376769 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:21:38.379253 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 17 12:21:38.379283 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:21:38.384323 systemd[1]: Started update-engine.service - Update Engine. 
Jan 17 12:21:38.386908 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:21:38.403345 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 12:21:38.396389 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:21:38.397754 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:21:39.403095 tar[1601]: linux-amd64/helm Jan 17 12:21:38.414794 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:21:39.400866 systemd-logind[1574]: New seat seat0. Jan 17 12:21:39.418328 extend-filesystems[1598]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:21:39.418328 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 12:21:39.418328 extend-filesystems[1598]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 12:21:39.459214 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Jan 17 12:21:39.459214 extend-filesystems[1559]: Found vdb Jan 17 12:21:39.502179 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:21:39.419663 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:21:39.502437 bash[1644]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:21:39.420075 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:21:39.445895 systemd-logind[1574]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:21:39.445930 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:21:39.448951 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:21:39.487787 systemd[1]: Starting sshkeys.service... Jan 17 12:21:39.503747 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:21:39.580458 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:21:39.594572 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:21:39.620338 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:21:39.647518 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1628) Jan 17 12:21:39.656830 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:21:39.733513 coreos-metadata[1664]: Jan 17 12:21:39.733 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:21:39.739977 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:21:39.740403 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:21:39.752623 coreos-metadata[1664]: Jan 17 12:21:39.751 INFO Fetch successful Jan 17 12:21:39.768238 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:21:39.812257 unknown[1664]: wrote ssh authorized keys file for user: core Jan 17 12:21:39.860084 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:21:39.878292 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:21:39.881513 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:21:39.902784 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
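The extend-filesystems entries above grow /dev/vda9 online from 553472 to 15121403 blocks, with resize2fs reporting "(4k) blocks". A small sketch of the size arithmetic implied by those figures; the block size is taken from that "(4k)" note in the log.

BLOCK_SIZE = 4096          # "(4k) blocks" per the resize2fs output above
OLD_BLOCKS = 553_472       # size before the online resize
NEW_BLOCKS = 15_121_403    # size reported after the resize

def gib(blocks, block_size=BLOCK_SIZE):
    return blocks * block_size / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~2.11 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~57.68 GiB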
Jan 17 12:21:39.913697 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:21:39.947858 update-ssh-keys[1688]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:21:39.953894 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:21:39.961524 systemd[1]: Finished sshkeys.service. Jan 17 12:21:40.037067 containerd[1603]: time="2025-01-17T12:21:40.036062398Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:21:40.122792 containerd[1603]: time="2025-01-17T12:21:40.122726271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:40.127314 containerd[1603]: time="2025-01-17T12:21:40.127244849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:40.127977 containerd[1603]: time="2025-01-17T12:21:40.127471321Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:21:40.127977 containerd[1603]: time="2025-01-17T12:21:40.127505383Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:21:40.127977 containerd[1603]: time="2025-01-17T12:21:40.127720598Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:21:40.127977 containerd[1603]: time="2025-01-17T12:21:40.127761553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:40.127977 containerd[1603]: time="2025-01-17T12:21:40.127846898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:40.127977 containerd[1603]: time="2025-01-17T12:21:40.127866816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:40.128619 containerd[1603]: time="2025-01-17T12:21:40.128582306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:40.128696 containerd[1603]: time="2025-01-17T12:21:40.128679545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:40.128766 containerd[1603]: time="2025-01-17T12:21:40.128749098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:40.128841 containerd[1603]: time="2025-01-17T12:21:40.128824023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:40.129430 containerd[1603]: time="2025-01-17T12:21:40.129076436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:40.129430 containerd[1603]: time="2025-01-17T12:21:40.129380615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:21:40.129815 containerd[1603]: time="2025-01-17T12:21:40.129787284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:40.129889 containerd[1603]: time="2025-01-17T12:21:40.129874352Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:21:40.130069 containerd[1603]: time="2025-01-17T12:21:40.130050818Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:21:40.130189 containerd[1603]: time="2025-01-17T12:21:40.130173298Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:21:40.145048 containerd[1603]: time="2025-01-17T12:21:40.143091497Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:21:40.145048 containerd[1603]: time="2025-01-17T12:21:40.143204595Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:21:40.145048 containerd[1603]: time="2025-01-17T12:21:40.143232060Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:21:40.145048 containerd[1603]: time="2025-01-17T12:21:40.143399329Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:21:40.145048 containerd[1603]: time="2025-01-17T12:21:40.143454808Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:21:40.145048 containerd[1603]: time="2025-01-17T12:21:40.143695839Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:21:40.145048 containerd[1603]: time="2025-01-17T12:21:40.144898929Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:21:40.147721 containerd[1603]: time="2025-01-17T12:21:40.147674743Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:21:40.148209 containerd[1603]: time="2025-01-17T12:21:40.148160332Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:21:40.149635 containerd[1603]: time="2025-01-17T12:21:40.149603989Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.149745528Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.149822977Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.149859472Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.149901228Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.149938322Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.149964612Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.149996242Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.150051091Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.150115306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.150150352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.150179320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.150210618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.150234083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152043 containerd[1603]: time="2025-01-17T12:21:40.150263990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150291568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150320119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150349357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150409366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150439074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150469990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150499186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150533071Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150585512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150614038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150639254Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150722810Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150758606Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:21:40.152612 containerd[1603]: time="2025-01-17T12:21:40.150779399Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:21:40.153077 containerd[1603]: time="2025-01-17T12:21:40.150808823Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:21:40.153077 containerd[1603]: time="2025-01-17T12:21:40.150834794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:21:40.153077 containerd[1603]: time="2025-01-17T12:21:40.150879826Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:21:40.153077 containerd[1603]: time="2025-01-17T12:21:40.150902551Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:21:40.153077 containerd[1603]: time="2025-01-17T12:21:40.150928576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:21:40.153272 containerd[1603]: time="2025-01-17T12:21:40.151469124Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:21:40.153272 containerd[1603]: time="2025-01-17T12:21:40.151596265Z" level=info msg="Connect containerd service" Jan 17 12:21:40.153272 containerd[1603]: time="2025-01-17T12:21:40.151696501Z" level=info msg="using legacy CRI server" Jan 17 12:21:40.153272 containerd[1603]: time="2025-01-17T12:21:40.151710823Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:21:40.153272 containerd[1603]: time="2025-01-17T12:21:40.151879574Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:21:40.159076 containerd[1603]: time="2025-01-17T12:21:40.158075543Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 
12:21:40.159076 containerd[1603]: time="2025-01-17T12:21:40.158508627Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:21:40.159076 containerd[1603]: time="2025-01-17T12:21:40.158570453Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:21:40.159076 containerd[1603]: time="2025-01-17T12:21:40.158673707Z" level=info msg="Start subscribing containerd event" Jan 17 12:21:40.162061 containerd[1603]: time="2025-01-17T12:21:40.161991211Z" level=info msg="Start recovering state" Jan 17 12:21:40.163040 containerd[1603]: time="2025-01-17T12:21:40.162323497Z" level=info msg="Start event monitor" Jan 17 12:21:40.163040 containerd[1603]: time="2025-01-17T12:21:40.162367312Z" level=info msg="Start snapshots syncer" Jan 17 12:21:40.163040 containerd[1603]: time="2025-01-17T12:21:40.162385832Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:21:40.163040 containerd[1603]: time="2025-01-17T12:21:40.162399478Z" level=info msg="Start streaming server" Jan 17 12:21:40.163719 containerd[1603]: time="2025-01-17T12:21:40.163569910Z" level=info msg="containerd successfully booted in 0.129111s" Jan 17 12:21:40.163765 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:21:40.628798 tar[1601]: linux-amd64/LICENSE Jan 17 12:21:40.628798 tar[1601]: linux-amd64/README.md Jan 17 12:21:40.654924 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:21:41.047405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:41.059589 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:21:41.059791 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:21:41.064935 systemd[1]: Startup finished in 8.230s (kernel) + 8.933s (userspace) = 17.163s. Jan 17 12:21:42.213191 kubelet[1715]: E0117 12:21:42.213056 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:21:42.216116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:21:42.216716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:21:46.106430 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:21:46.117523 systemd[1]: Started sshd@0-137.184.236.252:22-139.178.68.195:48746.service - OpenSSH per-connection server daemon (139.178.68.195:48746). Jan 17 12:21:46.223446 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 48746 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:46.226162 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:46.240884 systemd-logind[1574]: New session 1 of user core. Jan 17 12:21:46.242400 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:21:46.248668 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:21:46.270150 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:21:46.280705 systemd[1]: Starting user@500.service - User Manager for UID 500... 
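Once containerd logs "containerd successfully booted" it is serving its API on the Unix sockets listed above (/run/containerd/containerd.sock and the .ttrpc socket). A minimal readiness probe is sketched below; it only checks that the socket path from the log accepts a connection, it does not speak the gRPC protocol, and it needs the same privileges as any other containerd client.

import socket

# Socket path taken from the "serving..." entries above.
CONTAINERD_SOCK = "/run/containerd/containerd.sock"

def containerd_socket_ready(path=CONTAINERD_SOCK, timeout=1.0):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)          # succeeds once the daemon is listening
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    print("containerd socket ready:", containerd_socket_ready())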
Jan 17 12:21:46.287247 (systemd)[1734]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:21:46.454221 systemd[1734]: Queued start job for default target default.target. Jan 17 12:21:46.455398 systemd[1734]: Created slice app.slice - User Application Slice. Jan 17 12:21:46.455563 systemd[1734]: Reached target paths.target - Paths. Jan 17 12:21:46.455643 systemd[1734]: Reached target timers.target - Timers. Jan 17 12:21:46.471263 systemd[1734]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:21:46.482812 systemd[1734]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:21:46.484134 systemd[1734]: Reached target sockets.target - Sockets. Jan 17 12:21:46.484586 systemd[1734]: Reached target basic.target - Basic System. Jan 17 12:21:46.484682 systemd[1734]: Reached target default.target - Main User Target. Jan 17 12:21:46.484725 systemd[1734]: Startup finished in 187ms. Jan 17 12:21:46.485141 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:21:46.489472 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:21:46.561675 systemd[1]: Started sshd@1-137.184.236.252:22-139.178.68.195:48758.service - OpenSSH per-connection server daemon (139.178.68.195:48758). Jan 17 12:21:46.616136 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 48758 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:46.618366 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:46.627639 systemd-logind[1574]: New session 2 of user core. Jan 17 12:21:46.633661 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:21:46.703402 sshd[1746]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:46.708823 systemd[1]: sshd@1-137.184.236.252:22-139.178.68.195:48758.service: Deactivated successfully. Jan 17 12:21:46.714253 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:21:46.724915 systemd[1]: Started sshd@2-137.184.236.252:22-139.178.68.195:48766.service - OpenSSH per-connection server daemon (139.178.68.195:48766). Jan 17 12:21:46.725644 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:21:46.727559 systemd-logind[1574]: Removed session 2. Jan 17 12:21:46.773102 sshd[1754]: Accepted publickey for core from 139.178.68.195 port 48766 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:46.775272 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:46.783370 systemd-logind[1574]: New session 3 of user core. Jan 17 12:21:46.790665 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:21:46.853455 sshd[1754]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:46.868726 systemd[1]: Started sshd@3-137.184.236.252:22-139.178.68.195:48778.service - OpenSSH per-connection server daemon (139.178.68.195:48778). Jan 17 12:21:46.869904 systemd[1]: sshd@2-137.184.236.252:22-139.178.68.195:48766.service: Deactivated successfully. Jan 17 12:21:46.873865 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:21:46.883147 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:21:46.887484 systemd-logind[1574]: Removed session 3. 
Jan 17 12:21:46.930054 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 48778 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:46.932698 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:46.944715 systemd-logind[1574]: New session 4 of user core. Jan 17 12:21:46.957006 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:21:47.025560 sshd[1760]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:47.030964 systemd[1]: sshd@3-137.184.236.252:22-139.178.68.195:48778.service: Deactivated successfully. Jan 17 12:21:47.035074 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:21:47.035868 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:21:47.041521 systemd[1]: Started sshd@4-137.184.236.252:22-139.178.68.195:48792.service - OpenSSH per-connection server daemon (139.178.68.195:48792). Jan 17 12:21:47.043239 systemd-logind[1574]: Removed session 4. Jan 17 12:21:47.093900 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 48792 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:47.096438 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:47.103836 systemd-logind[1574]: New session 5 of user core. Jan 17 12:21:47.113494 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:21:47.192990 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:21:47.194138 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:47.209287 sudo[1774]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:47.215057 sshd[1770]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:47.234450 systemd[1]: Started sshd@5-137.184.236.252:22-139.178.68.195:48796.service - OpenSSH per-connection server daemon (139.178.68.195:48796). Jan 17 12:21:47.235080 systemd[1]: sshd@4-137.184.236.252:22-139.178.68.195:48792.service: Deactivated successfully. Jan 17 12:21:47.243559 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:21:47.246517 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:21:47.248622 systemd-logind[1574]: Removed session 5. Jan 17 12:21:47.280965 sshd[1776]: Accepted publickey for core from 139.178.68.195 port 48796 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:47.282164 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:47.287496 systemd-logind[1574]: New session 6 of user core. Jan 17 12:21:47.299874 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:21:47.367859 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:21:47.368799 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:47.375310 sudo[1784]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:47.389042 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:21:47.389440 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:47.414611 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 17 12:21:47.417056 auditctl[1787]: No rules Jan 17 12:21:47.417600 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:21:47.417990 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:47.433209 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:47.473432 augenrules[1806]: No rules Jan 17 12:21:47.475641 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:47.479180 sudo[1783]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:47.484145 sshd[1776]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:47.502045 systemd[1]: Started sshd@6-137.184.236.252:22-139.178.68.195:48810.service - OpenSSH per-connection server daemon (139.178.68.195:48810). Jan 17 12:21:47.502796 systemd[1]: sshd@5-137.184.236.252:22-139.178.68.195:48796.service: Deactivated successfully. Jan 17 12:21:47.509677 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:21:47.512215 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:21:47.516371 systemd-logind[1574]: Removed session 6. Jan 17 12:21:47.557971 sshd[1812]: Accepted publickey for core from 139.178.68.195 port 48810 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:47.559673 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:47.569722 systemd-logind[1574]: New session 7 of user core. Jan 17 12:21:47.582668 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:21:47.645882 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:21:47.646925 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:48.213881 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:21:48.215579 (dockerd)[1836]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:21:48.791517 dockerd[1836]: time="2025-01-17T12:21:48.791429520Z" level=info msg="Starting up" Jan 17 12:21:48.963833 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1134655652-merged.mount: Deactivated successfully. Jan 17 12:21:49.103684 systemd[1]: var-lib-docker-metacopy\x2dcheck4062889357-merged.mount: Deactivated successfully. Jan 17 12:21:49.133293 dockerd[1836]: time="2025-01-17T12:21:49.133132008Z" level=info msg="Loading containers: start." Jan 17 12:21:49.326721 kernel: Initializing XFRM netlink socket Jan 17 12:21:49.459246 systemd-networkd[1223]: docker0: Link UP Jan 17 12:21:49.485879 dockerd[1836]: time="2025-01-17T12:21:49.485809642Z" level=info msg="Loading containers: done." 
Jan 17 12:21:49.516813 dockerd[1836]: time="2025-01-17T12:21:49.516727944Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:21:49.517242 dockerd[1836]: time="2025-01-17T12:21:49.516912532Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:21:49.517242 dockerd[1836]: time="2025-01-17T12:21:49.517163863Z" level=info msg="Daemon has completed initialization" Jan 17 12:21:49.575434 dockerd[1836]: time="2025-01-17T12:21:49.574717416Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:21:49.575370 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:21:49.957705 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck757441931-merged.mount: Deactivated successfully. Jan 17 12:21:50.741633 containerd[1603]: time="2025-01-17T12:21:50.740815678Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:21:51.449897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193888801.mount: Deactivated successfully. Jan 17 12:21:52.466683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:21:52.480721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:52.727578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:52.732675 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:21:52.883041 kubelet[2055]: E0117 12:21:52.881722 2055 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:21:52.895536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:21:52.895814 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
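dockerd above completes initialization and logs "API listen on /run/docker.sock". A hedged sketch of querying that socket with only the standard library follows; GET /version is a standard Docker Engine API endpoint, but the exact reply fields printed here ("Version", "ApiVersion") are the usual ones rather than anything this log shows, and the call needs permission to open the socket.

import http.client
import json
import socket

DOCKER_SOCK = "/run/docker.sock"   # "API listen on /run/docker.sock" above

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects over a Unix socket instead of TCP."""
    def __init__(self, path, timeout=2.0):
        super().__init__("localhost", timeout=timeout)
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.settimeout(self.timeout)
        self.sock.connect(self._path)

conn = UnixHTTPConnection(DOCKER_SOCK)
conn.request("GET", "/version")
resp = conn.getresponse()
info = json.loads(resp.read())
print(info.get("Version"), info.get("ApiVersion"))
conn.close()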
Jan 17 12:21:54.045544 containerd[1603]: time="2025-01-17T12:21:54.043277458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:54.047391 containerd[1603]: time="2025-01-17T12:21:54.047269461Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 17 12:21:54.048734 containerd[1603]: time="2025-01-17T12:21:54.048681354Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:54.055902 containerd[1603]: time="2025-01-17T12:21:54.054251896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:54.055902 containerd[1603]: time="2025-01-17T12:21:54.055605454Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 3.31473791s" Jan 17 12:21:54.055902 containerd[1603]: time="2025-01-17T12:21:54.055661657Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:21:54.091514 containerd[1603]: time="2025-01-17T12:21:54.091452240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:21:56.275887 containerd[1603]: time="2025-01-17T12:21:56.274150434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:56.275887 containerd[1603]: time="2025-01-17T12:21:56.275196382Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 17 12:21:56.276883 containerd[1603]: time="2025-01-17T12:21:56.276843768Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:56.281515 containerd[1603]: time="2025-01-17T12:21:56.281447054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:56.283459 containerd[1603]: time="2025-01-17T12:21:56.283396411Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 2.191627487s" Jan 17 12:21:56.283459 containerd[1603]: time="2025-01-17T12:21:56.283462406Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 
12:21:56.322533 containerd[1603]: time="2025-01-17T12:21:56.322475856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:21:57.660062 containerd[1603]: time="2025-01-17T12:21:57.659929300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:57.661723 containerd[1603]: time="2025-01-17T12:21:57.661660633Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 17 12:21:57.663824 containerd[1603]: time="2025-01-17T12:21:57.663105166Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:57.726416 containerd[1603]: time="2025-01-17T12:21:57.726335971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:57.727514 containerd[1603]: time="2025-01-17T12:21:57.727231551Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.404697437s" Jan 17 12:21:57.727514 containerd[1603]: time="2025-01-17T12:21:57.727269615Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:21:57.768880 containerd[1603]: time="2025-01-17T12:21:57.768524380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:21:57.770528 systemd-resolved[1488]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 17 12:21:59.075945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327873753.mount: Deactivated successfully. 
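Each "Pulled image ... in ..." entry above pairs an image size with the wall-clock time containerd spent fetching it. The sketch below turns the figures from this log into an effective throughput; note that containerd reports the resolved repo-digest size rather than bytes on the wire, so these are only rough rates.

# (image, reported size in bytes, pull duration in seconds) from the log above
PULLS = [
    ("kube-apiserver:v1.29.13",          35_137_530, 3.31473791),
    ("kube-controller-manager:v1.29.13", 33_663_223, 2.191627487),
    ("kube-scheduler:v1.29.13",          18_779_441, 1.404697437),
]

for name, size, secs in PULLS:
    mib_per_s = size / 2**20 / secs
    print(f"{name:35s} {mib_per_s:6.1f} MiB/s")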
Jan 17 12:21:59.817288 containerd[1603]: time="2025-01-17T12:21:59.817183620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:59.819178 containerd[1603]: time="2025-01-17T12:21:59.819103974Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:21:59.819942 containerd[1603]: time="2025-01-17T12:21:59.819869378Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:59.823468 containerd[1603]: time="2025-01-17T12:21:59.822482006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:59.823468 containerd[1603]: time="2025-01-17T12:21:59.823308096Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.054733s" Jan 17 12:21:59.823468 containerd[1603]: time="2025-01-17T12:21:59.823347576Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:21:59.863820 containerd[1603]: time="2025-01-17T12:21:59.863765896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:22:00.443136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796192505.mount: Deactivated successfully. Jan 17 12:22:00.837460 systemd-resolved[1488]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 17 12:22:02.439622 containerd[1603]: time="2025-01-17T12:22:02.439552480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:02.443350 containerd[1603]: time="2025-01-17T12:22:02.443273839Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:22:02.444493 containerd[1603]: time="2025-01-17T12:22:02.444409757Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:02.447909 containerd[1603]: time="2025-01-17T12:22:02.447827896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:02.450769 containerd[1603]: time="2025-01-17T12:22:02.450464166Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.586646287s" Jan 17 12:22:02.450769 containerd[1603]: time="2025-01-17T12:22:02.450531137Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:22:02.510953 containerd[1603]: time="2025-01-17T12:22:02.510899504Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:22:03.103602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:22:03.119251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:03.156666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64133821.mount: Deactivated successfully. 
Jan 17 12:22:03.282576 containerd[1603]: time="2025-01-17T12:22:03.281028069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:03.285854 containerd[1603]: time="2025-01-17T12:22:03.285756972Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:22:03.317113 containerd[1603]: time="2025-01-17T12:22:03.315601773Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:03.325952 containerd[1603]: time="2025-01-17T12:22:03.325883191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:03.330866 containerd[1603]: time="2025-01-17T12:22:03.329696777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 818.739472ms" Jan 17 12:22:03.331119 containerd[1603]: time="2025-01-17T12:22:03.331099937Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:22:03.417230 containerd[1603]: time="2025-01-17T12:22:03.416589225Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:22:03.455312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:03.469808 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:22:03.622866 kubelet[2168]: E0117 12:22:03.622710 2168 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:22:03.628800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:22:03.630121 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:22:03.965406 systemd-resolved[1488]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 17 12:22:04.078307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293684176.mount: Deactivated successfully. 
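The kubelet has now exited three times with the same error because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style bootstrap that file is only written when the node is initialized or joined, so the restart loop above is expected until then (treating its absence as "node not yet bootstrapped" is an assumption about this setup, not something the log states). A minimal preflight sketch of the same check, with the path taken from the error message:

import pathlib
import sys

# Path from the kubelet "failed to load Kubelet config file" errors above.
KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    print(f"{KUBELET_CONFIG}: no such file or directory "
          "(node probably not bootstrapped yet)", file=sys.stderr)
    sys.exit(1)

print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")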
Jan 17 12:22:06.777671 containerd[1603]: time="2025-01-17T12:22:06.777582400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:06.779577 containerd[1603]: time="2025-01-17T12:22:06.779462278Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 17 12:22:06.781464 containerd[1603]: time="2025-01-17T12:22:06.780494614Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:06.785807 containerd[1603]: time="2025-01-17T12:22:06.785743660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:06.787492 containerd[1603]: time="2025-01-17T12:22:06.787438174Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.370777686s" Jan 17 12:22:06.787647 containerd[1603]: time="2025-01-17T12:22:06.787631943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:22:10.648669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:10.660805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:10.696321 systemd[1]: Reloading requested from client PID 2290 ('systemctl') (unit session-7.scope)... Jan 17 12:22:10.696558 systemd[1]: Reloading... Jan 17 12:22:10.858427 zram_generator::config[2330]: No configuration found. Jan 17 12:22:11.026097 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:22:11.121708 systemd[1]: Reloading finished in 424 ms. Jan 17 12:22:11.175872 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:22:11.176426 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:22:11.177102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:11.181492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:11.360338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:11.373815 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:22:11.461725 kubelet[2392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:11.461725 kubelet[2392]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 17 12:22:11.461725 kubelet[2392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:11.463359 kubelet[2392]: I0117 12:22:11.463272 2392 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:22:12.215830 kubelet[2392]: I0117 12:22:12.215766 2392 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:22:12.215830 kubelet[2392]: I0117 12:22:12.215818 2392 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:22:12.216301 kubelet[2392]: I0117 12:22:12.216196 2392 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:22:12.252367 kubelet[2392]: I0117 12:22:12.251718 2392 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:22:12.252367 kubelet[2392]: E0117 12:22:12.252300 2392 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.236.252:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:12.275077 kubelet[2392]: I0117 12:22:12.275031 2392 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:22:12.277323 kubelet[2392]: I0117 12:22:12.277274 2392 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:22:12.278326 kubelet[2392]: I0117 12:22:12.278289 2392 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:22:12.278576 kubelet[2392]: I0117 12:22:12.278337 2392 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:22:12.278576 kubelet[2392]: I0117 12:22:12.278349 2392 container_manager_linux.go:301] "Creating device plugin manager" 
Jan 17 12:22:12.278576 kubelet[2392]: I0117 12:22:12.278486 2392 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:12.280655 kubelet[2392]: W0117 12:22:12.280574 2392 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://137.184.236.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-1-b9b10bea58&limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:12.280869 kubelet[2392]: E0117 12:22:12.280843 2392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.236.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-1-b9b10bea58&limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:12.280996 kubelet[2392]: I0117 12:22:12.280963 2392 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:22:12.281064 kubelet[2392]: I0117 12:22:12.281049 2392 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:22:12.281251 kubelet[2392]: I0117 12:22:12.281094 2392 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:22:12.281251 kubelet[2392]: I0117 12:22:12.281109 2392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:22:12.284052 kubelet[2392]: W0117 12:22:12.282908 2392 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://137.184.236.252:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:12.284052 kubelet[2392]: E0117 12:22:12.282970 2392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.236.252:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:12.284052 kubelet[2392]: I0117 12:22:12.283278 2392 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:22:12.289304 kubelet[2392]: I0117 12:22:12.289249 2392 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:22:12.290854 kubelet[2392]: W0117 12:22:12.290801 2392 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
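The failed reflector lists in the entries above are ordinary List requests; the node list, for example, is filtered to this machine's own Node object via the fieldSelector visible in the URL. A rough client-go equivalent of that request is sketched below; the kubeconfig path is an assumption for illustration, while the node name and limit are taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the kubelet itself uses its bootstrap
	// and rotated client credentials instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Mirrors the request in the log:
	// GET /api/v1/nodes?fieldSelector=metadata.name=ci-4081.3.0-1-b9b10bea58&limit=500
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ci-4081.3.0-1-b9b10bea58",
		Limit:         500,
	})
	if err != nil {
		// While the API server is down this fails with the same
		// "connection refused" seen in the reflector entries.
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("nodes returned:", len(nodes.Items))
}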
Jan 17 12:22:12.293527 kubelet[2392]: I0117 12:22:12.293485 2392 server.go:1256] "Started kubelet" Jan 17 12:22:12.301756 kubelet[2392]: I0117 12:22:12.301591 2392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:22:12.304506 kubelet[2392]: E0117 12:22:12.304464 2392 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.236.252:6443/api/v1/namespaces/default/events\": dial tcp 137.184.236.252:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-1-b9b10bea58.181b7a4850cf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-1-b9b10bea58,UID:ci-4081.3.0-1-b9b10bea58,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-1-b9b10bea58,},FirstTimestamp:2025-01-17 12:22:12.293441684 +0000 UTC m=+0.912374034,LastTimestamp:2025-01-17 12:22:12.293441684 +0000 UTC m=+0.912374034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-1-b9b10bea58,}" Jan 17 12:22:12.310053 kubelet[2392]: I0117 12:22:12.309987 2392 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:22:12.312814 kubelet[2392]: I0117 12:22:12.312758 2392 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:22:12.313689 kubelet[2392]: I0117 12:22:12.313637 2392 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:22:12.316061 kubelet[2392]: I0117 12:22:12.315212 2392 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:22:12.316435 kubelet[2392]: I0117 12:22:12.316395 2392 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:22:12.316519 kubelet[2392]: I0117 12:22:12.316485 2392 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:22:12.317135 kubelet[2392]: I0117 12:22:12.317103 2392 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:22:12.317656 kubelet[2392]: W0117 12:22:12.317562 2392 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://137.184.236.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:12.317656 kubelet[2392]: E0117 12:22:12.317640 2392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.236.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:12.317816 kubelet[2392]: E0117 12:22:12.317744 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.236.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-1-b9b10bea58?timeout=10s\": dial tcp 137.184.236.252:6443: connect: connection refused" interval="200ms" Jan 17 12:22:12.320449 kubelet[2392]: I0117 12:22:12.320218 2392 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:22:12.320449 kubelet[2392]: I0117 12:22:12.320353 2392 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Jan 17 12:22:12.323991 kubelet[2392]: I0117 12:22:12.323950 2392 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:22:12.351304 kubelet[2392]: I0117 12:22:12.351261 2392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:22:12.353958 kubelet[2392]: I0117 12:22:12.353917 2392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:22:12.354199 kubelet[2392]: I0117 12:22:12.354186 2392 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:22:12.354279 kubelet[2392]: I0117 12:22:12.354272 2392 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:22:12.354424 kubelet[2392]: E0117 12:22:12.354409 2392 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:22:12.369101 kubelet[2392]: E0117 12:22:12.369059 2392 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:22:12.369315 kubelet[2392]: W0117 12:22:12.369258 2392 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://137.184.236.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:12.369366 kubelet[2392]: E0117 12:22:12.369337 2392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.236.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:12.376577 kubelet[2392]: I0117 12:22:12.376538 2392 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:22:12.376888 kubelet[2392]: I0117 12:22:12.376825 2392 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:22:12.376888 kubelet[2392]: I0117 12:22:12.376857 2392 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:12.380968 kubelet[2392]: I0117 12:22:12.380757 2392 policy_none.go:49] "None policy: Start" Jan 17 12:22:12.382052 kubelet[2392]: I0117 12:22:12.381958 2392 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:22:12.382052 kubelet[2392]: I0117 12:22:12.382035 2392 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:22:12.396054 kubelet[2392]: I0117 12:22:12.395263 2392 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:22:12.396054 kubelet[2392]: I0117 12:22:12.395683 2392 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:22:12.404001 kubelet[2392]: E0117 12:22:12.403961 2392 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-1-b9b10bea58\" not found" Jan 17 12:22:12.416520 kubelet[2392]: I0117 12:22:12.416459 2392 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.417140 kubelet[2392]: E0117 12:22:12.417108 2392 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.236.252:6443/api/v1/nodes\": dial tcp 137.184.236.252:6443: connect: connection refused" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.456416 kubelet[2392]: 
I0117 12:22:12.455507 2392 topology_manager.go:215] "Topology Admit Handler" podUID="cb68003cb4e049be2c8226452af5cc29" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.457464 kubelet[2392]: I0117 12:22:12.457427 2392 topology_manager.go:215] "Topology Admit Handler" podUID="7cf95ffa07a2cccc7d1c20a3f68ea2a8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.458794 kubelet[2392]: I0117 12:22:12.458759 2392 topology_manager.go:215] "Topology Admit Handler" podUID="c6d0e3e7b6cb2dbf1717566952b2a9c2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.519259 kubelet[2392]: E0117 12:22:12.519096 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.236.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-1-b9b10bea58?timeout=10s\": dial tcp 137.184.236.252:6443: connect: connection refused" interval="400ms" Jan 17 12:22:12.618158 kubelet[2392]: I0117 12:22:12.617753 2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.618158 kubelet[2392]: I0117 12:22:12.617815 2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.618158 kubelet[2392]: I0117 12:22:12.617849 2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cf95ffa07a2cccc7d1c20a3f68ea2a8-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-1-b9b10bea58\" (UID: \"7cf95ffa07a2cccc7d1c20a3f68ea2a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.618158 kubelet[2392]: I0117 12:22:12.617869 2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cf95ffa07a2cccc7d1c20a3f68ea2a8-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-1-b9b10bea58\" (UID: \"7cf95ffa07a2cccc7d1c20a3f68ea2a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.618158 kubelet[2392]: I0117 12:22:12.617894 2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cf95ffa07a2cccc7d1c20a3f68ea2a8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-1-b9b10bea58\" (UID: \"7cf95ffa07a2cccc7d1c20a3f68ea2a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.619348 kubelet[2392]: I0117 12:22:12.617914 2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb68003cb4e049be2c8226452af5cc29-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-1-b9b10bea58\" (UID: \"cb68003cb4e049be2c8226452af5cc29\") " pod="kube-system/kube-scheduler-ci-4081.3.0-1-b9b10bea58" 
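"Failed to ensure lease exists, will retry" is the kubelet checking its node lease under the kube-node-lease namespace; notice the retry interval doubling across these entries (200ms earlier, 400ms here, later 800ms and 1.6s). The sketch below reproduces that kind of check with a doubling backoff via client-go; the kubeconfig path, step count, and helper wiring are illustrative assumptions, not the kubelet's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "ci-4081.3.0-1-b9b10bea58"
	// Roughly matches the 200ms -> 400ms -> 800ms -> 1.6s progression in the log.
	backoff := wait.Backoff{Duration: 200 * time.Millisecond, Factor: 2.0, Steps: 4}

	err = wait.ExponentialBackoff(backoff, func() (bool, error) {
		_, getErr := clientset.CoordinationV1().Leases("kube-node-lease").
			Get(context.TODO(), nodeName, metav1.GetOptions{})
		if getErr != nil {
			fmt.Println("lease not reachable yet:", getErr)
			return false, nil // retry after the next backoff interval
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up waiting for the node lease:", err)
	}
}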
Jan 17 12:22:12.619348 kubelet[2392]: I0117 12:22:12.617934 2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.619348 kubelet[2392]: I0117 12:22:12.617969 2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.619348 kubelet[2392]: I0117 12:22:12.617995 2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.619348 kubelet[2392]: I0117 12:22:12.618785 2392 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.619348 kubelet[2392]: E0117 12:22:12.619244 2392 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.236.252:6443/api/v1/nodes\": dial tcp 137.184.236.252:6443: connect: connection refused" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:12.762840 kubelet[2392]: E0117 12:22:12.762677 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:12.764132 containerd[1603]: time="2025-01-17T12:22:12.763757502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-1-b9b10bea58,Uid:cb68003cb4e049be2c8226452af5cc29,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:12.768816 kubelet[2392]: E0117 12:22:12.768738 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:12.769858 containerd[1603]: time="2025-01-17T12:22:12.769373953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-1-b9b10bea58,Uid:7cf95ffa07a2cccc7d1c20a3f68ea2a8,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:12.779223 kubelet[2392]: E0117 12:22:12.778682 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:12.784692 containerd[1603]: time="2025-01-17T12:22:12.784626595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-1-b9b10bea58,Uid:c6d0e3e7b6cb2dbf1717566952b2a9c2,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:12.920294 kubelet[2392]: E0117 12:22:12.920242 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.236.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-1-b9b10bea58?timeout=10s\": dial tcp 
137.184.236.252:6443: connect: connection refused" interval="800ms" Jan 17 12:22:13.021346 kubelet[2392]: I0117 12:22:13.021172 2392 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:13.021735 kubelet[2392]: E0117 12:22:13.021693 2392 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.236.252:6443/api/v1/nodes\": dial tcp 137.184.236.252:6443: connect: connection refused" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:13.283359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746010395.mount: Deactivated successfully. Jan 17 12:22:13.293435 containerd[1603]: time="2025-01-17T12:22:13.293358712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:13.294876 containerd[1603]: time="2025-01-17T12:22:13.294822624Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:13.296896 containerd[1603]: time="2025-01-17T12:22:13.296479949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:22:13.296896 containerd[1603]: time="2025-01-17T12:22:13.296838213Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:22:13.302111 containerd[1603]: time="2025-01-17T12:22:13.302002879Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:13.303186 containerd[1603]: time="2025-01-17T12:22:13.303134972Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:13.305234 containerd[1603]: time="2025-01-17T12:22:13.304682114Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:22:13.306300 containerd[1603]: time="2025-01-17T12:22:13.306234574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:13.310598 containerd[1603]: time="2025-01-17T12:22:13.310529234Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.661884ms" Jan 17 12:22:13.317865 containerd[1603]: time="2025-01-17T12:22:13.317798048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 532.903535ms" Jan 17 12:22:13.324799 containerd[1603]: time="2025-01-17T12:22:13.322550887Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 553.081976ms" Jan 17 12:22:13.396777 kubelet[2392]: W0117 12:22:13.393654 2392 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://137.184.236.252:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:13.396777 kubelet[2392]: E0117 12:22:13.393746 2392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.236.252:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:13.516892 kubelet[2392]: W0117 12:22:13.516632 2392 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://137.184.236.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-1-b9b10bea58&limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:13.517354 kubelet[2392]: E0117 12:22:13.517328 2392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.236.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-1-b9b10bea58&limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:13.554165 containerd[1603]: time="2025-01-17T12:22:13.553868222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:13.554165 containerd[1603]: time="2025-01-17T12:22:13.553945309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:13.554165 containerd[1603]: time="2025-01-17T12:22:13.553975346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:13.556053 containerd[1603]: time="2025-01-17T12:22:13.555915228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:13.567789 containerd[1603]: time="2025-01-17T12:22:13.567362556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:13.567789 containerd[1603]: time="2025-01-17T12:22:13.567449100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:13.567789 containerd[1603]: time="2025-01-17T12:22:13.567474868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:13.567789 containerd[1603]: time="2025-01-17T12:22:13.567606882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:13.574060 containerd[1603]: time="2025-01-17T12:22:13.573480057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:13.574060 containerd[1603]: time="2025-01-17T12:22:13.573549951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:13.574060 containerd[1603]: time="2025-01-17T12:22:13.573569909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:13.574060 containerd[1603]: time="2025-01-17T12:22:13.573694673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:13.712402 kubelet[2392]: W0117 12:22:13.712223 2392 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://137.184.236.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:13.712402 kubelet[2392]: E0117 12:22:13.712359 2392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.236.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:13.723185 kubelet[2392]: E0117 12:22:13.722627 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.236.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-1-b9b10bea58?timeout=10s\": dial tcp 137.184.236.252:6443: connect: connection refused" interval="1.6s" Jan 17 12:22:13.730119 containerd[1603]: time="2025-01-17T12:22:13.730057265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-1-b9b10bea58,Uid:cb68003cb4e049be2c8226452af5cc29,Namespace:kube-system,Attempt:0,} returns sandbox id \"7475153814749e5eb51646127a4b8977b7eb0823b708c13280fc972ab7762bdd\"" Jan 17 12:22:13.732722 kubelet[2392]: E0117 12:22:13.732685 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:13.742974 containerd[1603]: time="2025-01-17T12:22:13.742126648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-1-b9b10bea58,Uid:7cf95ffa07a2cccc7d1c20a3f68ea2a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"96f304fe9fd75b4634261b8cea0b7db41d456229ea02d99469f4e4d667630b33\"" Jan 17 12:22:13.742974 containerd[1603]: time="2025-01-17T12:22:13.742626959Z" level=info msg="CreateContainer within sandbox \"7475153814749e5eb51646127a4b8977b7eb0823b708c13280fc972ab7762bdd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:22:13.744484 kubelet[2392]: E0117 12:22:13.744337 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:13.751489 containerd[1603]: time="2025-01-17T12:22:13.751410619Z" level=info msg="CreateContainer within sandbox \"96f304fe9fd75b4634261b8cea0b7db41d456229ea02d99469f4e4d667630b33\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:22:13.752564 containerd[1603]: time="2025-01-17T12:22:13.752292649Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-1-b9b10bea58,Uid:c6d0e3e7b6cb2dbf1717566952b2a9c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4622e7a1d2fd5cbd3796dbc5fdb628136df2eeaa295aa9216b23b32ce94e752\"" Jan 17 12:22:13.754542 kubelet[2392]: E0117 12:22:13.754499 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:13.760239 containerd[1603]: time="2025-01-17T12:22:13.760181930Z" level=info msg="CreateContainer within sandbox \"b4622e7a1d2fd5cbd3796dbc5fdb628136df2eeaa295aa9216b23b32ce94e752\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:22:13.790385 containerd[1603]: time="2025-01-17T12:22:13.790281621Z" level=info msg="CreateContainer within sandbox \"7475153814749e5eb51646127a4b8977b7eb0823b708c13280fc972ab7762bdd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a558cba3e67a57e9d85a2db2fd1364f475e5c382b5711fba2f6b59bad71d5f38\"" Jan 17 12:22:13.791186 containerd[1603]: time="2025-01-17T12:22:13.791138655Z" level=info msg="CreateContainer within sandbox \"96f304fe9fd75b4634261b8cea0b7db41d456229ea02d99469f4e4d667630b33\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1f54ba5498c46df84badf953ba62727c0c02e24dba1622c255f15e3baa0ff70f\"" Jan 17 12:22:13.792989 containerd[1603]: time="2025-01-17T12:22:13.792887741Z" level=info msg="StartContainer for \"1f54ba5498c46df84badf953ba62727c0c02e24dba1622c255f15e3baa0ff70f\"" Jan 17 12:22:13.800056 containerd[1603]: time="2025-01-17T12:22:13.799224692Z" level=info msg="CreateContainer within sandbox \"b4622e7a1d2fd5cbd3796dbc5fdb628136df2eeaa295aa9216b23b32ce94e752\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2b4c4338d7111184eb27cff9427ad442202e59f75b74c7917d4910349fcebe6f\"" Jan 17 12:22:13.800056 containerd[1603]: time="2025-01-17T12:22:13.799505459Z" level=info msg="StartContainer for \"a558cba3e67a57e9d85a2db2fd1364f475e5c382b5711fba2f6b59bad71d5f38\"" Jan 17 12:22:13.806392 containerd[1603]: time="2025-01-17T12:22:13.806237989Z" level=info msg="StartContainer for \"2b4c4338d7111184eb27cff9427ad442202e59f75b74c7917d4910349fcebe6f\"" Jan 17 12:22:13.823880 kubelet[2392]: I0117 12:22:13.823839 2392 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:13.825055 kubelet[2392]: E0117 12:22:13.825002 2392 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.236.252:6443/api/v1/nodes\": dial tcp 137.184.236.252:6443: connect: connection refused" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:13.970313 kubelet[2392]: W0117 12:22:13.970155 2392 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://137.184.236.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:13.970494 kubelet[2392]: E0117 12:22:13.970323 2392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.236.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:14.006220 containerd[1603]: time="2025-01-17T12:22:14.005600018Z" level=info 
msg="StartContainer for \"a558cba3e67a57e9d85a2db2fd1364f475e5c382b5711fba2f6b59bad71d5f38\" returns successfully" Jan 17 12:22:14.006634 containerd[1603]: time="2025-01-17T12:22:14.006505846Z" level=info msg="StartContainer for \"2b4c4338d7111184eb27cff9427ad442202e59f75b74c7917d4910349fcebe6f\" returns successfully" Jan 17 12:22:14.008676 containerd[1603]: time="2025-01-17T12:22:14.006510620Z" level=info msg="StartContainer for \"1f54ba5498c46df84badf953ba62727c0c02e24dba1622c255f15e3baa0ff70f\" returns successfully" Jan 17 12:22:14.397055 kubelet[2392]: E0117 12:22:14.395638 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:14.402870 kubelet[2392]: E0117 12:22:14.401359 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:14.410294 kubelet[2392]: E0117 12:22:14.410247 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:14.424806 kubelet[2392]: E0117 12:22:14.424627 2392 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.236.252:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.236.252:6443: connect: connection refused Jan 17 12:22:15.416195 kubelet[2392]: E0117 12:22:15.416156 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:15.421246 kubelet[2392]: E0117 12:22:15.421210 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:15.428590 kubelet[2392]: I0117 12:22:15.428549 2392 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:16.712088 kubelet[2392]: I0117 12:22:16.712034 2392 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:16.731042 kubelet[2392]: E0117 12:22:16.727965 2392 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-1-b9b10bea58.181b7a4850cf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-1-b9b10bea58,UID:ci-4081.3.0-1-b9b10bea58,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-1-b9b10bea58,},FirstTimestamp:2025-01-17 12:22:12.293441684 +0000 UTC m=+0.912374034,LastTimestamp:2025-01-17 12:22:12.293441684 +0000 UTC m=+0.912374034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-1-b9b10bea58,}" Jan 17 12:22:17.286090 kubelet[2392]: I0117 12:22:17.285939 2392 apiserver.go:52] "Watching apiserver" Jan 17 12:22:17.317165 kubelet[2392]: I0117 12:22:17.317071 2392 desired_state_of_world_populator.go:159] 
"Finished populating initial desired state of world" Jan 17 12:22:18.325590 kubelet[2392]: W0117 12:22:18.324576 2392 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:18.325590 kubelet[2392]: E0117 12:22:18.325174 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:18.328817 kubelet[2392]: W0117 12:22:18.328614 2392 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:18.332626 kubelet[2392]: E0117 12:22:18.332562 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:18.428170 kubelet[2392]: E0117 12:22:18.427885 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:18.430005 kubelet[2392]: E0117 12:22:18.429853 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:19.564581 systemd[1]: Reloading requested from client PID 2664 ('systemctl') (unit session-7.scope)... Jan 17 12:22:19.565118 systemd[1]: Reloading... Jan 17 12:22:19.697074 zram_generator::config[2706]: No configuration found. Jan 17 12:22:19.877139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:22:20.002348 systemd[1]: Reloading finished in 436 ms. Jan 17 12:22:20.057085 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:20.058052 kubelet[2392]: I0117 12:22:20.057913 2392 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:22:20.067693 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:22:20.068537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:20.077820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:20.276403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:20.291192 (kubelet)[2764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:22:20.393827 kubelet[2764]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:20.396057 kubelet[2764]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:22:20.396057 kubelet[2764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:20.396057 kubelet[2764]: I0117 12:22:20.394336 2764 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:22:20.402532 kubelet[2764]: I0117 12:22:20.402463 2764 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:22:20.403476 kubelet[2764]: I0117 12:22:20.403447 2764 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:22:20.403985 kubelet[2764]: I0117 12:22:20.403958 2764 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:22:20.407176 kubelet[2764]: I0117 12:22:20.407138 2764 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:22:20.424819 kubelet[2764]: I0117 12:22:20.424764 2764 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:22:20.444273 kubelet[2764]: I0117 12:22:20.444226 2764 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:22:20.445420 kubelet[2764]: I0117 12:22:20.445388 2764 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:22:20.445855 kubelet[2764]: I0117 12:22:20.445821 2764 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:22:20.446333 kubelet[2764]: I0117 12:22:20.446081 2764 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:22:20.446333 kubelet[2764]: I0117 12:22:20.446110 2764 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:22:20.446333 kubelet[2764]: I0117 12:22:20.446181 2764 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:20.446520 kubelet[2764]: I0117 12:22:20.446508 2764 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:22:20.447176 kubelet[2764]: I0117 12:22:20.447144 2764 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:22:20.447298 
kubelet[2764]: I0117 12:22:20.447287 2764 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:22:20.448211 kubelet[2764]: I0117 12:22:20.447360 2764 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:22:20.449166 kubelet[2764]: I0117 12:22:20.449129 2764 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:22:20.449554 kubelet[2764]: I0117 12:22:20.449537 2764 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:22:20.450231 kubelet[2764]: I0117 12:22:20.450214 2764 server.go:1256] "Started kubelet" Jan 17 12:22:20.470349 kubelet[2764]: I0117 12:22:20.468786 2764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:22:20.486086 kubelet[2764]: I0117 12:22:20.486047 2764 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:22:20.487581 kubelet[2764]: I0117 12:22:20.487542 2764 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:22:20.495064 kubelet[2764]: I0117 12:22:20.495007 2764 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:22:20.495958 kubelet[2764]: I0117 12:22:20.495511 2764 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:22:20.501209 kubelet[2764]: I0117 12:22:20.500983 2764 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:22:20.503525 kubelet[2764]: I0117 12:22:20.503145 2764 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:22:20.503525 kubelet[2764]: I0117 12:22:20.503359 2764 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:22:20.511706 kubelet[2764]: I0117 12:22:20.511676 2764 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:22:20.512383 kubelet[2764]: I0117 12:22:20.511993 2764 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:22:20.520109 kubelet[2764]: I0117 12:22:20.519279 2764 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:22:20.532923 kubelet[2764]: E0117 12:22:20.532761 2764 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:22:20.551373 kubelet[2764]: I0117 12:22:20.551084 2764 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:22:20.554057 kubelet[2764]: I0117 12:22:20.553476 2764 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:22:20.554057 kubelet[2764]: I0117 12:22:20.553526 2764 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:22:20.554057 kubelet[2764]: I0117 12:22:20.553555 2764 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:22:20.554057 kubelet[2764]: E0117 12:22:20.553639 2764 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:22:20.603065 kubelet[2764]: I0117 12:22:20.602834 2764 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.624680 kubelet[2764]: I0117 12:22:20.622682 2764 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.624680 kubelet[2764]: I0117 12:22:20.623209 2764 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.654215 kubelet[2764]: E0117 12:22:20.653891 2764 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:22:20.701186 kubelet[2764]: I0117 12:22:20.701130 2764 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:22:20.702104 kubelet[2764]: I0117 12:22:20.701406 2764 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:22:20.702104 kubelet[2764]: I0117 12:22:20.701443 2764 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:20.702104 kubelet[2764]: I0117 12:22:20.701677 2764 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:22:20.702104 kubelet[2764]: I0117 12:22:20.701712 2764 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:22:20.703961 kubelet[2764]: I0117 12:22:20.703934 2764 policy_none.go:49] "None policy: Start" Jan 17 12:22:20.707280 kubelet[2764]: I0117 12:22:20.706613 2764 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:22:20.707448 kubelet[2764]: I0117 12:22:20.707348 2764 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:22:20.708037 kubelet[2764]: I0117 12:22:20.707834 2764 state_mem.go:75] "Updated machine memory state" Jan 17 12:22:20.716515 kubelet[2764]: I0117 12:22:20.712399 2764 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:22:20.716515 kubelet[2764]: I0117 12:22:20.714624 2764 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:22:20.855779 kubelet[2764]: I0117 12:22:20.854220 2764 topology_manager.go:215] "Topology Admit Handler" podUID="7cf95ffa07a2cccc7d1c20a3f68ea2a8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.855779 kubelet[2764]: I0117 12:22:20.854356 2764 topology_manager.go:215] "Topology Admit Handler" podUID="c6d0e3e7b6cb2dbf1717566952b2a9c2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.855779 kubelet[2764]: I0117 12:22:20.854402 2764 topology_manager.go:215] "Topology Admit Handler" podUID="cb68003cb4e049be2c8226452af5cc29" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.868098 kubelet[2764]: W0117 12:22:20.867421 2764 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:20.868098 kubelet[2764]: E0117 12:22:20.867972 2764 kubelet.go:1921] "Failed 
creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-1-b9b10bea58\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.877082 kubelet[2764]: W0117 12:22:20.875357 2764 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:20.877082 kubelet[2764]: W0117 12:22:20.875553 2764 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:20.877082 kubelet[2764]: E0117 12:22:20.875944 2764 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.905760 kubelet[2764]: I0117 12:22:20.905325 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cf95ffa07a2cccc7d1c20a3f68ea2a8-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-1-b9b10bea58\" (UID: \"7cf95ffa07a2cccc7d1c20a3f68ea2a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.905760 kubelet[2764]: I0117 12:22:20.905397 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.905760 kubelet[2764]: I0117 12:22:20.905435 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb68003cb4e049be2c8226452af5cc29-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-1-b9b10bea58\" (UID: \"cb68003cb4e049be2c8226452af5cc29\") " pod="kube-system/kube-scheduler-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.905760 kubelet[2764]: I0117 12:22:20.905466 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.905760 kubelet[2764]: I0117 12:22:20.905500 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.906332 kubelet[2764]: I0117 12:22:20.905543 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.906332 kubelet[2764]: I0117 12:22:20.905578 2764 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cf95ffa07a2cccc7d1c20a3f68ea2a8-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-1-b9b10bea58\" (UID: \"7cf95ffa07a2cccc7d1c20a3f68ea2a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.906332 kubelet[2764]: I0117 12:22:20.905610 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cf95ffa07a2cccc7d1c20a3f68ea2a8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-1-b9b10bea58\" (UID: \"7cf95ffa07a2cccc7d1c20a3f68ea2a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:20.906332 kubelet[2764]: I0117 12:22:20.905645 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6d0e3e7b6cb2dbf1717566952b2a9c2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-1-b9b10bea58\" (UID: \"c6d0e3e7b6cb2dbf1717566952b2a9c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:21.170522 kubelet[2764]: E0117 12:22:21.169860 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:21.179060 kubelet[2764]: E0117 12:22:21.178559 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:21.179060 kubelet[2764]: E0117 12:22:21.178984 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:21.463131 kubelet[2764]: I0117 12:22:21.462686 2764 apiserver.go:52] "Watching apiserver" Jan 17 12:22:21.503986 kubelet[2764]: I0117 12:22:21.503906 2764 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:22:21.612055 kubelet[2764]: E0117 12:22:21.609585 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:21.615061 kubelet[2764]: E0117 12:22:21.613175 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:21.641603 kubelet[2764]: W0117 12:22:21.639460 2764 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:22:21.641603 kubelet[2764]: E0117 12:22:21.639542 2764 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-1-b9b10bea58\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-1-b9b10bea58" Jan 17 12:22:21.641603 kubelet[2764]: E0117 12:22:21.639857 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:21.706819 kubelet[2764]: I0117 12:22:21.706777 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4081.3.0-1-b9b10bea58" podStartSLOduration=3.706728102 podStartE2EDuration="3.706728102s" podCreationTimestamp="2025-01-17 12:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:21.690034136 +0000 UTC m=+1.389439538" watchObservedRunningTime="2025-01-17 12:22:21.706728102 +0000 UTC m=+1.406133492" Jan 17 12:22:21.729198 kubelet[2764]: I0117 12:22:21.728695 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-1-b9b10bea58" podStartSLOduration=3.728646908 podStartE2EDuration="3.728646908s" podCreationTimestamp="2025-01-17 12:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:21.710905888 +0000 UTC m=+1.410311302" watchObservedRunningTime="2025-01-17 12:22:21.728646908 +0000 UTC m=+1.428052305" Jan 17 12:22:22.614059 kubelet[2764]: E0117 12:22:22.613971 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:22.617062 kubelet[2764]: E0117 12:22:22.616982 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:23.544742 kubelet[2764]: I0117 12:22:23.544249 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-1-b9b10bea58" podStartSLOduration=3.544174941 podStartE2EDuration="3.544174941s" podCreationTimestamp="2025-01-17 12:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:21.730931106 +0000 UTC m=+1.430336504" watchObservedRunningTime="2025-01-17 12:22:23.544174941 +0000 UTC m=+3.243580337" Jan 17 12:22:23.622966 kubelet[2764]: E0117 12:22:23.622914 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:23.936217 update_engine[1580]: I20250117 12:22:23.936080 1580 update_attempter.cc:509] Updating boot flags... Jan 17 12:22:24.016853 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2822) Jan 17 12:22:24.098196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2819) Jan 17 12:22:24.624617 kubelet[2764]: E0117 12:22:24.622482 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:25.624060 kubelet[2764]: E0117 12:22:25.623908 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:26.865308 sudo[1819]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:26.872972 sshd[1812]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:26.881529 systemd[1]: sshd@6-137.184.236.252:22-139.178.68.195:48810.service: Deactivated successfully. 
Jan 17 12:22:26.886610 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:22:26.888356 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:22:26.890229 systemd-logind[1574]: Removed session 7. Jan 17 12:22:29.606313 kubelet[2764]: E0117 12:22:29.606133 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:29.635843 kubelet[2764]: E0117 12:22:29.635792 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:31.304969 kubelet[2764]: E0117 12:22:31.304860 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:33.895035 kubelet[2764]: I0117 12:22:33.894980 2764 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:22:33.898718 containerd[1603]: time="2025-01-17T12:22:33.898507333Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:22:33.899492 kubelet[2764]: I0117 12:22:33.898853 2764 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:22:34.598665 kubelet[2764]: I0117 12:22:34.598537 2764 topology_manager.go:215] "Topology Admit Handler" podUID="fe97e247-627e-4a0f-bb56-c215817121d6" podNamespace="kube-system" podName="kube-proxy-nvklj" Jan 17 12:22:34.775320 kubelet[2764]: I0117 12:22:34.775046 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe97e247-627e-4a0f-bb56-c215817121d6-kube-proxy\") pod \"kube-proxy-nvklj\" (UID: \"fe97e247-627e-4a0f-bb56-c215817121d6\") " pod="kube-system/kube-proxy-nvklj" Jan 17 12:22:34.775320 kubelet[2764]: I0117 12:22:34.775133 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe97e247-627e-4a0f-bb56-c215817121d6-lib-modules\") pod \"kube-proxy-nvklj\" (UID: \"fe97e247-627e-4a0f-bb56-c215817121d6\") " pod="kube-system/kube-proxy-nvklj" Jan 17 12:22:34.775320 kubelet[2764]: I0117 12:22:34.775168 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe97e247-627e-4a0f-bb56-c215817121d6-xtables-lock\") pod \"kube-proxy-nvklj\" (UID: \"fe97e247-627e-4a0f-bb56-c215817121d6\") " pod="kube-system/kube-proxy-nvklj" Jan 17 12:22:34.775320 kubelet[2764]: I0117 12:22:34.775205 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c75zf\" (UniqueName: \"kubernetes.io/projected/fe97e247-627e-4a0f-bb56-c215817121d6-kube-api-access-c75zf\") pod \"kube-proxy-nvklj\" (UID: \"fe97e247-627e-4a0f-bb56-c215817121d6\") " pod="kube-system/kube-proxy-nvklj" Jan 17 12:22:34.955853 kubelet[2764]: I0117 12:22:34.955330 2764 topology_manager.go:215] "Topology Admit Handler" podUID="23ff6683-3f53-4f54-a3a2-4c372685c404" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-w64f2" Jan 17 12:22:34.979764 kubelet[2764]: I0117 12:22:34.978129 2764 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23ff6683-3f53-4f54-a3a2-4c372685c404-var-lib-calico\") pod \"tigera-operator-c7ccbd65-w64f2\" (UID: \"23ff6683-3f53-4f54-a3a2-4c372685c404\") " pod="tigera-operator/tigera-operator-c7ccbd65-w64f2" Jan 17 12:22:34.979764 kubelet[2764]: I0117 12:22:34.978197 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qsvh\" (UniqueName: \"kubernetes.io/projected/23ff6683-3f53-4f54-a3a2-4c372685c404-kube-api-access-5qsvh\") pod \"tigera-operator-c7ccbd65-w64f2\" (UID: \"23ff6683-3f53-4f54-a3a2-4c372685c404\") " pod="tigera-operator/tigera-operator-c7ccbd65-w64f2" Jan 17 12:22:35.209291 kubelet[2764]: E0117 12:22:35.208993 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:35.211697 containerd[1603]: time="2025-01-17T12:22:35.211453990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nvklj,Uid:fe97e247-627e-4a0f-bb56-c215817121d6,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:35.260183 containerd[1603]: time="2025-01-17T12:22:35.259496302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:35.260183 containerd[1603]: time="2025-01-17T12:22:35.259590244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:35.260183 containerd[1603]: time="2025-01-17T12:22:35.259609284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:35.260183 containerd[1603]: time="2025-01-17T12:22:35.259751756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:35.270205 containerd[1603]: time="2025-01-17T12:22:35.268005175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-w64f2,Uid:23ff6683-3f53-4f54-a3a2-4c372685c404,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:22:35.338178 containerd[1603]: time="2025-01-17T12:22:35.338121418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nvklj,Uid:fe97e247-627e-4a0f-bb56-c215817121d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a0b2ddd78f0a1e7bc6a384c3edf37b604b7c8d0de68fdc18a10edf3f0144fd9\"" Jan 17 12:22:35.339739 kubelet[2764]: E0117 12:22:35.339704 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:35.354421 containerd[1603]: time="2025-01-17T12:22:35.354354450Z" level=info msg="CreateContainer within sandbox \"4a0b2ddd78f0a1e7bc6a384c3edf37b604b7c8d0de68fdc18a10edf3f0144fd9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:22:35.361693 containerd[1603]: time="2025-01-17T12:22:35.361037041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:35.361693 containerd[1603]: time="2025-01-17T12:22:35.361243197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:35.361693 containerd[1603]: time="2025-01-17T12:22:35.361299519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:35.361693 containerd[1603]: time="2025-01-17T12:22:35.361573022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:35.378978 containerd[1603]: time="2025-01-17T12:22:35.378688476Z" level=info msg="CreateContainer within sandbox \"4a0b2ddd78f0a1e7bc6a384c3edf37b604b7c8d0de68fdc18a10edf3f0144fd9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b00f9184760bab6d2f4d26255ec2ebafb27beffc1ba1731a51f54692b197215c\"" Jan 17 12:22:35.381378 containerd[1603]: time="2025-01-17T12:22:35.380052682Z" level=info msg="StartContainer for \"b00f9184760bab6d2f4d26255ec2ebafb27beffc1ba1731a51f54692b197215c\"" Jan 17 12:22:35.510594 containerd[1603]: time="2025-01-17T12:22:35.510156248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-w64f2,Uid:23ff6683-3f53-4f54-a3a2-4c372685c404,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"09ae24627267290b59a5c616e60f9174dac6384f284809b4ce73c0c69f7e6a15\"" Jan 17 12:22:35.515168 containerd[1603]: time="2025-01-17T12:22:35.515049965Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:22:35.529625 containerd[1603]: time="2025-01-17T12:22:35.529491326Z" level=info msg="StartContainer for \"b00f9184760bab6d2f4d26255ec2ebafb27beffc1ba1731a51f54692b197215c\" returns successfully" Jan 17 12:22:35.654737 kubelet[2764]: E0117 12:22:35.652894 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:35.679176 kubelet[2764]: I0117 12:22:35.676647 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nvklj" podStartSLOduration=1.67658822 podStartE2EDuration="1.67658822s" podCreationTimestamp="2025-01-17 12:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:35.674163004 +0000 UTC m=+15.373568402" watchObservedRunningTime="2025-01-17 12:22:35.67658822 +0000 UTC m=+15.375993623" Jan 17 12:22:37.038915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995870556.mount: Deactivated successfully. 
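[annotation] The pod_startup_latency_tracker entry for kube-proxy-nvklj just above reports podStartE2EDuration=1.67658822s, which is simply watchObservedRunningTime minus podCreationTimestamp; because no image pull was needed (the pulling timestamps are zero), the SLO duration equals the end-to-end duration. A quick check of that arithmetic with the values copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the default Go time.Time formatting used in these log fields.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-01-17 12:22:34 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-01-17 12:22:35.67658822 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Gap between pod creation and the watch observation of the running pod.
	fmt.Println(observed.Sub(created)) // prints 1.67658822s, matching podStartE2EDuration
}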
Jan 17 12:22:38.860599 containerd[1603]: time="2025-01-17T12:22:38.860072671Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:38.862375 containerd[1603]: time="2025-01-17T12:22:38.862267381Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764301" Jan 17 12:22:38.867069 containerd[1603]: time="2025-01-17T12:22:38.864119550Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:38.869661 containerd[1603]: time="2025-01-17T12:22:38.869586787Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:38.873470 containerd[1603]: time="2025-01-17T12:22:38.873401914Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.358290121s" Jan 17 12:22:38.873755 containerd[1603]: time="2025-01-17T12:22:38.873726794Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:22:38.879789 containerd[1603]: time="2025-01-17T12:22:38.879727234Z" level=info msg="CreateContainer within sandbox \"09ae24627267290b59a5c616e60f9174dac6384f284809b4ce73c0c69f7e6a15\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:22:38.915466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2792814939.mount: Deactivated successfully. Jan 17 12:22:38.921121 containerd[1603]: time="2025-01-17T12:22:38.920988579Z" level=info msg="CreateContainer within sandbox \"09ae24627267290b59a5c616e60f9174dac6384f284809b4ce73c0c69f7e6a15\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a6bca9aba3e5e4946c5deb99c784e54a7a55b431a99ce80f8649cd46a1de5c75\"" Jan 17 12:22:38.924930 containerd[1603]: time="2025-01-17T12:22:38.923363757Z" level=info msg="StartContainer for \"a6bca9aba3e5e4946c5deb99c784e54a7a55b431a99ce80f8649cd46a1de5c75\"" Jan 17 12:22:38.980097 systemd[1]: run-containerd-runc-k8s.io-a6bca9aba3e5e4946c5deb99c784e54a7a55b431a99ce80f8649cd46a1de5c75-runc.CX0Ryf.mount: Deactivated successfully. 
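[annotation] The containerd entries for the tigera-operator pod trace the usual bring-up order: RunPodSandbox returns a sandbox id, the operator image is pulled, CreateContainer places the container inside that sandbox, and StartContainer runs it. A compact sketch of that ordering; the interface and names below are illustrative only, not the real CRI client API:

package main

import "fmt"

// podRuntime captures, with hypothetical names, the four steps visible above.
type podRuntime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	PullImage(ref string) error
	CreateContainer(sandboxID, image string) (containerID string, err error)
	StartContainer(containerID string) error
}

// bringUp replays the order seen in the log: sandbox first, then the image
// pull, then the container is created inside that sandbox and started.
func bringUp(rt podRuntime, pod, image string) error {
	sb, err := rt.RunPodSandbox(pod)
	if err != nil {
		return err
	}
	if err := rt.PullImage(image); err != nil {
		return err
	}
	ctr, err := rt.CreateContainer(sb, image)
	if err != nil {
		return err
	}
	return rt.StartContainer(ctr)
}

// fakeRuntime just prints each step so the sketch runs standalone.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error) {
	fmt.Println("RunPodSandbox", pod)
	return "sandbox-1", nil
}
func (fakeRuntime) PullImage(ref string) error { fmt.Println("PullImage", ref); return nil }
func (fakeRuntime) CreateContainer(sb, image string) (string, error) {
	fmt.Println("CreateContainer within sandbox", sb)
	return "container-1", nil
}
func (fakeRuntime) StartContainer(id string) error { fmt.Println("StartContainer", id); return nil }

func main() {
	_ = bringUp(fakeRuntime{}, "tigera-operator-c7ccbd65-w64f2", "quay.io/tigera/operator:v1.36.2")
}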
Jan 17 12:22:39.047168 containerd[1603]: time="2025-01-17T12:22:39.047098206Z" level=info msg="StartContainer for \"a6bca9aba3e5e4946c5deb99c784e54a7a55b431a99ce80f8649cd46a1de5c75\" returns successfully" Jan 17 12:22:39.700991 kubelet[2764]: I0117 12:22:39.700398 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-w64f2" podStartSLOduration=2.337291264 podStartE2EDuration="5.698494005s" podCreationTimestamp="2025-01-17 12:22:34 +0000 UTC" firstStartedPulling="2025-01-17 12:22:35.513176666 +0000 UTC m=+15.212582037" lastFinishedPulling="2025-01-17 12:22:38.874379389 +0000 UTC m=+18.573784778" observedRunningTime="2025-01-17 12:22:39.698468509 +0000 UTC m=+19.397873902" watchObservedRunningTime="2025-01-17 12:22:39.698494005 +0000 UTC m=+19.397899408" Jan 17 12:22:42.637352 kubelet[2764]: I0117 12:22:42.637196 2764 topology_manager.go:215] "Topology Admit Handler" podUID="b71a8357-0820-4095-ae55-91d9bf439b98" podNamespace="calico-system" podName="calico-typha-7dff5c4bd8-9vd4j" Jan 17 12:22:42.660659 kubelet[2764]: I0117 12:22:42.658381 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjs4x\" (UniqueName: \"kubernetes.io/projected/b71a8357-0820-4095-ae55-91d9bf439b98-kube-api-access-hjs4x\") pod \"calico-typha-7dff5c4bd8-9vd4j\" (UID: \"b71a8357-0820-4095-ae55-91d9bf439b98\") " pod="calico-system/calico-typha-7dff5c4bd8-9vd4j" Jan 17 12:22:42.660659 kubelet[2764]: I0117 12:22:42.658490 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b71a8357-0820-4095-ae55-91d9bf439b98-tigera-ca-bundle\") pod \"calico-typha-7dff5c4bd8-9vd4j\" (UID: \"b71a8357-0820-4095-ae55-91d9bf439b98\") " pod="calico-system/calico-typha-7dff5c4bd8-9vd4j" Jan 17 12:22:42.660659 kubelet[2764]: I0117 12:22:42.658536 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b71a8357-0820-4095-ae55-91d9bf439b98-typha-certs\") pod \"calico-typha-7dff5c4bd8-9vd4j\" (UID: \"b71a8357-0820-4095-ae55-91d9bf439b98\") " pod="calico-system/calico-typha-7dff5c4bd8-9vd4j" Jan 17 12:22:42.954442 kubelet[2764]: E0117 12:22:42.952980 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:42.959422 containerd[1603]: time="2025-01-17T12:22:42.959367740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dff5c4bd8-9vd4j,Uid:b71a8357-0820-4095-ae55-91d9bf439b98,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:43.009967 kubelet[2764]: I0117 12:22:43.008979 2764 topology_manager.go:215] "Topology Admit Handler" podUID="3599793b-4c6b-4515-9dcb-1a39f803b4c5" podNamespace="calico-system" podName="calico-node-4nbjz" Jan 17 12:22:43.051818 containerd[1603]: time="2025-01-17T12:22:43.051548792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:43.051818 containerd[1603]: time="2025-01-17T12:22:43.051654546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:43.051818 containerd[1603]: time="2025-01-17T12:22:43.051697260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.052818 containerd[1603]: time="2025-01-17T12:22:43.051850511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.066621 kubelet[2764]: I0117 12:22:43.064923 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3599793b-4c6b-4515-9dcb-1a39f803b4c5-xtables-lock\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.066621 kubelet[2764]: I0117 12:22:43.064974 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3599793b-4c6b-4515-9dcb-1a39f803b4c5-policysync\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.066621 kubelet[2764]: I0117 12:22:43.065002 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3599793b-4c6b-4515-9dcb-1a39f803b4c5-cni-net-dir\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.066621 kubelet[2764]: I0117 12:22:43.065075 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3599793b-4c6b-4515-9dcb-1a39f803b4c5-lib-modules\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.066621 kubelet[2764]: I0117 12:22:43.065096 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3599793b-4c6b-4515-9dcb-1a39f803b4c5-cni-log-dir\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.067090 kubelet[2764]: I0117 12:22:43.065124 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3599793b-4c6b-4515-9dcb-1a39f803b4c5-var-lib-calico\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.067090 kubelet[2764]: I0117 12:22:43.065148 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gldgl\" (UniqueName: \"kubernetes.io/projected/3599793b-4c6b-4515-9dcb-1a39f803b4c5-kube-api-access-gldgl\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.067090 kubelet[2764]: I0117 12:22:43.065168 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3599793b-4c6b-4515-9dcb-1a39f803b4c5-var-run-calico\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 
17 12:22:43.067090 kubelet[2764]: I0117 12:22:43.065190 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3599793b-4c6b-4515-9dcb-1a39f803b4c5-cni-bin-dir\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.067090 kubelet[2764]: I0117 12:22:43.065211 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3599793b-4c6b-4515-9dcb-1a39f803b4c5-flexvol-driver-host\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.067379 kubelet[2764]: I0117 12:22:43.065231 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3599793b-4c6b-4515-9dcb-1a39f803b4c5-tigera-ca-bundle\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.067379 kubelet[2764]: I0117 12:22:43.065252 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3599793b-4c6b-4515-9dcb-1a39f803b4c5-node-certs\") pod \"calico-node-4nbjz\" (UID: \"3599793b-4c6b-4515-9dcb-1a39f803b4c5\") " pod="calico-system/calico-node-4nbjz" Jan 17 12:22:43.148358 kubelet[2764]: I0117 12:22:43.145172 2764 topology_manager.go:215] "Topology Admit Handler" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" podNamespace="calico-system" podName="csi-node-driver-mvjx9" Jan 17 12:22:43.148358 kubelet[2764]: E0117 12:22:43.145925 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvjx9" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" Jan 17 12:22:43.167427 kubelet[2764]: I0117 12:22:43.165998 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9e48819f-106c-43b3-89f6-2976b3a7c1c2-registration-dir\") pod \"csi-node-driver-mvjx9\" (UID: \"9e48819f-106c-43b3-89f6-2976b3a7c1c2\") " pod="calico-system/csi-node-driver-mvjx9" Jan 17 12:22:43.176055 kubelet[2764]: I0117 12:22:43.175690 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfskt\" (UniqueName: \"kubernetes.io/projected/9e48819f-106c-43b3-89f6-2976b3a7c1c2-kube-api-access-dfskt\") pod \"csi-node-driver-mvjx9\" (UID: \"9e48819f-106c-43b3-89f6-2976b3a7c1c2\") " pod="calico-system/csi-node-driver-mvjx9" Jan 17 12:22:43.181086 kubelet[2764]: I0117 12:22:43.179502 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9e48819f-106c-43b3-89f6-2976b3a7c1c2-varrun\") pod \"csi-node-driver-mvjx9\" (UID: \"9e48819f-106c-43b3-89f6-2976b3a7c1c2\") " pod="calico-system/csi-node-driver-mvjx9" Jan 17 12:22:43.182575 kubelet[2764]: I0117 12:22:43.182499 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/9e48819f-106c-43b3-89f6-2976b3a7c1c2-kubelet-dir\") pod \"csi-node-driver-mvjx9\" (UID: \"9e48819f-106c-43b3-89f6-2976b3a7c1c2\") " pod="calico-system/csi-node-driver-mvjx9" Jan 17 12:22:43.182887 kubelet[2764]: I0117 12:22:43.182834 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9e48819f-106c-43b3-89f6-2976b3a7c1c2-socket-dir\") pod \"csi-node-driver-mvjx9\" (UID: \"9e48819f-106c-43b3-89f6-2976b3a7c1c2\") " pod="calico-system/csi-node-driver-mvjx9" Jan 17 12:22:43.190616 kubelet[2764]: E0117 12:22:43.190429 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.190616 kubelet[2764]: W0117 12:22:43.190503 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.192387 kubelet[2764]: E0117 12:22:43.190555 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.193197 kubelet[2764]: E0117 12:22:43.193147 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.193197 kubelet[2764]: W0117 12:22:43.193189 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.194290 kubelet[2764]: E0117 12:22:43.193459 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.195269 kubelet[2764]: E0117 12:22:43.195143 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.195269 kubelet[2764]: W0117 12:22:43.195269 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.195420 kubelet[2764]: E0117 12:22:43.195346 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.196789 kubelet[2764]: E0117 12:22:43.196746 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.197699 kubelet[2764]: W0117 12:22:43.196780 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.200066 kubelet[2764]: E0117 12:22:43.199097 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.200573 kubelet[2764]: E0117 12:22:43.200536 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.200573 kubelet[2764]: W0117 12:22:43.200567 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.200705 kubelet[2764]: E0117 12:22:43.200677 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.200947 kubelet[2764]: E0117 12:22:43.200930 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.200947 kubelet[2764]: W0117 12:22:43.200946 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.201130 kubelet[2764]: E0117 12:22:43.201055 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.201464 kubelet[2764]: E0117 12:22:43.201238 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.201464 kubelet[2764]: W0117 12:22:43.201251 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.201464 kubelet[2764]: E0117 12:22:43.201299 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.201464 kubelet[2764]: E0117 12:22:43.201441 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.201464 kubelet[2764]: W0117 12:22:43.201451 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.201679 kubelet[2764]: E0117 12:22:43.201541 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.203680 kubelet[2764]: E0117 12:22:43.203470 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.203680 kubelet[2764]: W0117 12:22:43.203494 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.206491 kubelet[2764]: E0117 12:22:43.205266 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.206491 kubelet[2764]: W0117 12:22:43.205293 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.206491 kubelet[2764]: E0117 12:22:43.205523 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.206491 kubelet[2764]: W0117 12:22:43.205536 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.206491 kubelet[2764]: E0117 12:22:43.205717 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.206491 kubelet[2764]: W0117 12:22:43.205728 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.206491 kubelet[2764]: E0117 12:22:43.205902 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.206491 kubelet[2764]: E0117 12:22:43.205931 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.206491 kubelet[2764]: E0117 12:22:43.205949 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.206491 kubelet[2764]: E0117 12:22:43.205981 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.210328 kubelet[2764]: E0117 12:22:43.207130 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.210328 kubelet[2764]: W0117 12:22:43.207153 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.210328 kubelet[2764]: E0117 12:22:43.207196 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.210328 kubelet[2764]: E0117 12:22:43.209152 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.210328 kubelet[2764]: W0117 12:22:43.209169 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.210328 kubelet[2764]: E0117 12:22:43.209192 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.212175 kubelet[2764]: E0117 12:22:43.212089 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.212175 kubelet[2764]: W0117 12:22:43.212118 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.212338 kubelet[2764]: E0117 12:22:43.212226 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.214195 kubelet[2764]: E0117 12:22:43.213619 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.214195 kubelet[2764]: W0117 12:22:43.213661 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.214370 kubelet[2764]: E0117 12:22:43.214246 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.214370 kubelet[2764]: W0117 12:22:43.214277 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.216527 kubelet[2764]: E0117 12:22:43.214650 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.216527 kubelet[2764]: W0117 12:22:43.214791 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.216527 kubelet[2764]: E0117 12:22:43.214829 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.216527 kubelet[2764]: E0117 12:22:43.215970 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.216527 kubelet[2764]: W0117 12:22:43.215984 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.216527 kubelet[2764]: E0117 12:22:43.216041 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.218874 kubelet[2764]: E0117 12:22:43.218117 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.218874 kubelet[2764]: E0117 12:22:43.218222 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.221193 kubelet[2764]: E0117 12:22:43.221152 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.221316 kubelet[2764]: W0117 12:22:43.221292 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.221364 kubelet[2764]: E0117 12:22:43.221343 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.224400 kubelet[2764]: E0117 12:22:43.223757 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.224400 kubelet[2764]: W0117 12:22:43.223789 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.224400 kubelet[2764]: E0117 12:22:43.224275 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.227130 kubelet[2764]: E0117 12:22:43.226346 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.227130 kubelet[2764]: W0117 12:22:43.226376 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.227130 kubelet[2764]: E0117 12:22:43.226404 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.227381 kubelet[2764]: E0117 12:22:43.227222 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.227381 kubelet[2764]: W0117 12:22:43.227236 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.227381 kubelet[2764]: E0117 12:22:43.227261 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.227519 kubelet[2764]: E0117 12:22:43.227473 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.227519 kubelet[2764]: W0117 12:22:43.227482 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.227519 kubelet[2764]: E0117 12:22:43.227496 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.286594 kubelet[2764]: E0117 12:22:43.286519 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.286594 kubelet[2764]: W0117 12:22:43.286557 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.289270 kubelet[2764]: E0117 12:22:43.286902 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.289270 kubelet[2764]: E0117 12:22:43.289093 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.289270 kubelet[2764]: W0117 12:22:43.289144 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.289394 containerd[1603]: time="2025-01-17T12:22:43.287396288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dff5c4bd8-9vd4j,Uid:b71a8357-0820-4095-ae55-91d9bf439b98,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d736d51415926f93cea1afd2d1a26eaf387c653ea4ec5d7c27333af3d82d5a8\"" Jan 17 12:22:43.289456 kubelet[2764]: E0117 12:22:43.289186 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.291398 kubelet[2764]: E0117 12:22:43.289594 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.291398 kubelet[2764]: W0117 12:22:43.289663 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.291398 kubelet[2764]: E0117 12:22:43.289681 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.291398 kubelet[2764]: E0117 12:22:43.290556 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.291398 kubelet[2764]: W0117 12:22:43.290572 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.291398 kubelet[2764]: E0117 12:22:43.290588 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.291398 kubelet[2764]: E0117 12:22:43.290857 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.291398 kubelet[2764]: W0117 12:22:43.290866 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.291398 kubelet[2764]: E0117 12:22:43.290880 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.291398 kubelet[2764]: E0117 12:22:43.291403 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.309904 kubelet[2764]: W0117 12:22:43.291415 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.309904 kubelet[2764]: E0117 12:22:43.291429 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.309904 kubelet[2764]: E0117 12:22:43.291765 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.309904 kubelet[2764]: W0117 12:22:43.291810 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.309904 kubelet[2764]: E0117 12:22:43.291827 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.309904 kubelet[2764]: E0117 12:22:43.292042 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.309904 kubelet[2764]: W0117 12:22:43.292051 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.309904 kubelet[2764]: E0117 12:22:43.292061 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.309904 kubelet[2764]: E0117 12:22:43.292230 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.309904 kubelet[2764]: W0117 12:22:43.292240 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.311924 containerd[1603]: time="2025-01-17T12:22:43.303195938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:22:43.311975 kubelet[2764]: E0117 12:22:43.292255 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.311975 kubelet[2764]: E0117 12:22:43.292695 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.311975 kubelet[2764]: W0117 12:22:43.292706 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.311975 kubelet[2764]: E0117 12:22:43.292721 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.311975 kubelet[2764]: E0117 12:22:43.292934 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.311975 kubelet[2764]: W0117 12:22:43.292942 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.311975 kubelet[2764]: E0117 12:22:43.292954 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.311975 kubelet[2764]: E0117 12:22:43.293166 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.311975 kubelet[2764]: W0117 12:22:43.293176 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.311975 kubelet[2764]: E0117 12:22:43.293192 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.315200 kubelet[2764]: E0117 12:22:43.293404 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315200 kubelet[2764]: W0117 12:22:43.293412 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315200 kubelet[2764]: E0117 12:22:43.293424 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.315200 kubelet[2764]: E0117 12:22:43.293651 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315200 kubelet[2764]: W0117 12:22:43.293658 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315200 kubelet[2764]: E0117 12:22:43.293668 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.315200 kubelet[2764]: E0117 12:22:43.293884 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315200 kubelet[2764]: W0117 12:22:43.293896 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315200 kubelet[2764]: E0117 12:22:43.293936 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.315200 kubelet[2764]: E0117 12:22:43.294222 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315469 kubelet[2764]: W0117 12:22:43.294376 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315469 kubelet[2764]: E0117 12:22:43.294420 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.315469 kubelet[2764]: E0117 12:22:43.294713 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315469 kubelet[2764]: W0117 12:22:43.294721 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315469 kubelet[2764]: E0117 12:22:43.294732 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.315469 kubelet[2764]: E0117 12:22:43.295198 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315469 kubelet[2764]: W0117 12:22:43.295207 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315469 kubelet[2764]: E0117 12:22:43.295219 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.315469 kubelet[2764]: E0117 12:22:43.295385 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315469 kubelet[2764]: W0117 12:22:43.295397 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315743 kubelet[2764]: E0117 12:22:43.295411 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.315743 kubelet[2764]: E0117 12:22:43.295686 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315743 kubelet[2764]: W0117 12:22:43.295700 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315743 kubelet[2764]: E0117 12:22:43.295715 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.315743 kubelet[2764]: E0117 12:22:43.295945 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315743 kubelet[2764]: W0117 12:22:43.295953 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315743 kubelet[2764]: E0117 12:22:43.295969 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.315743 kubelet[2764]: E0117 12:22:43.296160 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.315743 kubelet[2764]: W0117 12:22:43.296167 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.315743 kubelet[2764]: E0117 12:22:43.296177 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.318280 kubelet[2764]: E0117 12:22:43.296308 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.318280 kubelet[2764]: W0117 12:22:43.296314 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.318280 kubelet[2764]: E0117 12:22:43.296322 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.318280 kubelet[2764]: E0117 12:22:43.296436 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.318280 kubelet[2764]: W0117 12:22:43.296457 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.318280 kubelet[2764]: E0117 12:22:43.296470 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.318280 kubelet[2764]: E0117 12:22:43.299536 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.318280 kubelet[2764]: W0117 12:22:43.299626 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.318280 kubelet[2764]: E0117 12:22:43.299739 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:43.318280 kubelet[2764]: E0117 12:22:43.300769 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:43.318607 kubelet[2764]: E0117 12:22:43.317502 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:43.322654 containerd[1603]: time="2025-01-17T12:22:43.322259065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4nbjz,Uid:3599793b-4c6b-4515-9dcb-1a39f803b4c5,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:43.347814 kubelet[2764]: E0117 12:22:43.346980 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:43.347814 kubelet[2764]: W0117 12:22:43.347512 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:43.347814 kubelet[2764]: E0117 12:22:43.347563 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:43.388232 containerd[1603]: time="2025-01-17T12:22:43.387976450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:43.388232 containerd[1603]: time="2025-01-17T12:22:43.388172776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:43.388232 containerd[1603]: time="2025-01-17T12:22:43.388192151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.390040 containerd[1603]: time="2025-01-17T12:22:43.389137971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:43.483088 containerd[1603]: time="2025-01-17T12:22:43.482805913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4nbjz,Uid:3599793b-4c6b-4515-9dcb-1a39f803b4c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"8eac3d1d00ae153258c469b5c9bf52dfd7bf308dca62f68a82baffc6d2f30e3c\"" Jan 17 12:22:43.486031 kubelet[2764]: E0117 12:22:43.485306 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:44.555759 kubelet[2764]: E0117 12:22:44.555578 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvjx9" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" Jan 17 12:22:44.767054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587894256.mount: Deactivated successfully. 
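The repeated driver-call failures above come from the kubelet's periodic FlexVolume probe: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver binary it finds (here nodeagent~uds/uds, which the flexvol-driver container started further down in this log is expected to install) with the init sub-command, and parses the JSON the driver prints to stdout. Because the executable is still missing, stdout is empty and the unmarshal fails with "unexpected end of JSON input". As a rough illustration of the contract the kubelet expects, here is a minimal sketch of a FlexVolume driver entry point in Python; it is not Calico's actual uds driver, only the general call/response shape.

#!/usr/bin/env python3
# Minimal sketch of a FlexVolume driver entry point, for illustration only;
# this is not Calico's actual uds driver. The kubelet execs the binary found
# under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/
# with a sub-command such as "init" and parses the JSON printed to stdout.
# An empty stdout (missing executable) is exactly what yields the
# "unexpected end of JSON input" errors in the log above.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and advertise that attach/detach is not implemented.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any operation this sketch does not handle is reported as unsupported.
    print(json.dumps({"status": "Not supported", "message": "operation not implemented: " + op}))
    return 0

if __name__ == "__main__":
    sys.exit(main())

Once a valid driver binary is in place, the same probe succeeds silently, which is why these messages stop appearing later in the log.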
Jan 17 12:22:45.674078 containerd[1603]: time="2025-01-17T12:22:45.673681768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:45.676251 containerd[1603]: time="2025-01-17T12:22:45.676152354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:22:45.677601 containerd[1603]: time="2025-01-17T12:22:45.677536067Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:45.681228 containerd[1603]: time="2025-01-17T12:22:45.681070249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:45.687358 containerd[1603]: time="2025-01-17T12:22:45.687272629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.379545573s" Jan 17 12:22:45.687358 containerd[1603]: time="2025-01-17T12:22:45.687354308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:22:45.688879 containerd[1603]: time="2025-01-17T12:22:45.688162277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:22:45.723299 containerd[1603]: time="2025-01-17T12:22:45.723235934Z" level=info msg="CreateContainer within sandbox \"4d736d51415926f93cea1afd2d1a26eaf387c653ea4ec5d7c27333af3d82d5a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:22:45.800650 containerd[1603]: time="2025-01-17T12:22:45.800543854Z" level=info msg="CreateContainer within sandbox \"4d736d51415926f93cea1afd2d1a26eaf387c653ea4ec5d7c27333af3d82d5a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f9370cbd30d3c17176b5f8ed403657799c3dbf37299ddc3bc2e5f90752e95c3\"" Jan 17 12:22:45.804062 containerd[1603]: time="2025-01-17T12:22:45.803247986Z" level=info msg="StartContainer for \"9f9370cbd30d3c17176b5f8ed403657799c3dbf37299ddc3bc2e5f90752e95c3\"" Jan 17 12:22:45.917389 containerd[1603]: time="2025-01-17T12:22:45.917164535Z" level=info msg="StartContainer for \"9f9370cbd30d3c17176b5f8ed403657799c3dbf37299ddc3bc2e5f90752e95c3\" returns successfully" Jan 17 12:22:46.555973 kubelet[2764]: E0117 12:22:46.555914 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvjx9" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" Jan 17 12:22:46.750137 kubelet[2764]: E0117 12:22:46.749263 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:46.803219 kubelet[2764]: E0117 12:22:46.797523 2764 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Jan 17 12:22:46.803219 kubelet[2764]: W0117 12:22:46.797558 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.803219 kubelet[2764]: E0117 12:22:46.797599 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.809747 kubelet[2764]: E0117 12:22:46.809551 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.809747 kubelet[2764]: W0117 12:22:46.809608 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.809747 kubelet[2764]: E0117 12:22:46.809649 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.815854 kubelet[2764]: E0117 12:22:46.815811 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.815854 kubelet[2764]: W0117 12:22:46.815847 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.816117 kubelet[2764]: E0117 12:22:46.815885 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.820211 kubelet[2764]: E0117 12:22:46.820083 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.820211 kubelet[2764]: W0117 12:22:46.820111 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.820211 kubelet[2764]: E0117 12:22:46.820144 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.821516 kubelet[2764]: E0117 12:22:46.821262 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.821516 kubelet[2764]: W0117 12:22:46.821293 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.821516 kubelet[2764]: E0117 12:22:46.821376 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:46.821908 kubelet[2764]: E0117 12:22:46.821891 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.822176 kubelet[2764]: W0117 12:22:46.821991 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.822176 kubelet[2764]: E0117 12:22:46.822050 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.824955 kubelet[2764]: E0117 12:22:46.824738 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.824955 kubelet[2764]: W0117 12:22:46.824766 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.824955 kubelet[2764]: E0117 12:22:46.824800 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.825622 kubelet[2764]: E0117 12:22:46.825599 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.825817 kubelet[2764]: W0117 12:22:46.825738 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.825817 kubelet[2764]: E0117 12:22:46.825775 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.826535 kubelet[2764]: E0117 12:22:46.826394 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.826535 kubelet[2764]: W0117 12:22:46.826411 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.826535 kubelet[2764]: E0117 12:22:46.826432 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.830947 kubelet[2764]: E0117 12:22:46.830872 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.831186 kubelet[2764]: W0117 12:22:46.831156 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.831329 kubelet[2764]: E0117 12:22:46.831313 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:46.832414 kubelet[2764]: E0117 12:22:46.832387 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.832848 kubelet[2764]: W0117 12:22:46.832655 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.832848 kubelet[2764]: E0117 12:22:46.832695 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.837466 kubelet[2764]: E0117 12:22:46.837099 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.837466 kubelet[2764]: W0117 12:22:46.837135 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.837466 kubelet[2764]: E0117 12:22:46.837170 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.837856 kubelet[2764]: E0117 12:22:46.837837 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.838117 kubelet[2764]: W0117 12:22:46.837946 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.838117 kubelet[2764]: E0117 12:22:46.837999 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.838530 kubelet[2764]: E0117 12:22:46.838510 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.839049 kubelet[2764]: W0117 12:22:46.838825 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.839049 kubelet[2764]: E0117 12:22:46.838861 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.839690 kubelet[2764]: E0117 12:22:46.839671 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.840392 kubelet[2764]: W0117 12:22:46.840270 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.840392 kubelet[2764]: E0117 12:22:46.840311 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:46.928570 kubelet[2764]: E0117 12:22:46.928250 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.928570 kubelet[2764]: W0117 12:22:46.928277 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.928570 kubelet[2764]: E0117 12:22:46.928328 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.929211 kubelet[2764]: E0117 12:22:46.928953 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.929211 kubelet[2764]: W0117 12:22:46.928992 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.929211 kubelet[2764]: E0117 12:22:46.929064 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.932597 kubelet[2764]: E0117 12:22:46.929665 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.932597 kubelet[2764]: W0117 12:22:46.930002 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.932597 kubelet[2764]: E0117 12:22:46.932260 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.935659 kubelet[2764]: E0117 12:22:46.935617 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.936182 kubelet[2764]: W0117 12:22:46.935877 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.936182 kubelet[2764]: E0117 12:22:46.936121 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.936619 kubelet[2764]: E0117 12:22:46.936602 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.936758 kubelet[2764]: W0117 12:22:46.936740 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.937002 kubelet[2764]: E0117 12:22:46.936988 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:46.937223 kubelet[2764]: E0117 12:22:46.937210 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.937347 kubelet[2764]: W0117 12:22:46.937331 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.937645 kubelet[2764]: E0117 12:22:46.937609 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.937982 kubelet[2764]: E0117 12:22:46.937961 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.941263 kubelet[2764]: W0117 12:22:46.940159 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.941263 kubelet[2764]: E0117 12:22:46.940321 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.941263 kubelet[2764]: E0117 12:22:46.940817 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.941263 kubelet[2764]: W0117 12:22:46.940834 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.941263 kubelet[2764]: E0117 12:22:46.940990 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.941840 kubelet[2764]: E0117 12:22:46.941727 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.941840 kubelet[2764]: W0117 12:22:46.941746 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.941968 kubelet[2764]: E0117 12:22:46.941912 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:46.942370 kubelet[2764]: E0117 12:22:46.942340 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.942513 kubelet[2764]: W0117 12:22:46.942494 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.942964 kubelet[2764]: E0117 12:22:46.942930 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.945623 kubelet[2764]: W0117 12:22:46.945200 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.945623 kubelet[2764]: E0117 12:22:46.943112 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.945623 kubelet[2764]: E0117 12:22:46.945540 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.946302 kubelet[2764]: E0117 12:22:46.946001 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.946302 kubelet[2764]: W0117 12:22:46.946090 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.946302 kubelet[2764]: E0117 12:22:46.946181 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.948669 kubelet[2764]: E0117 12:22:46.948230 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.948669 kubelet[2764]: W0117 12:22:46.948258 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.948669 kubelet[2764]: E0117 12:22:46.948292 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.950751 kubelet[2764]: E0117 12:22:46.950432 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.950751 kubelet[2764]: W0117 12:22:46.950457 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.950751 kubelet[2764]: E0117 12:22:46.950522 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:46.951071 kubelet[2764]: E0117 12:22:46.951054 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.951160 kubelet[2764]: W0117 12:22:46.951147 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.951552 kubelet[2764]: E0117 12:22:46.951539 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.951651 kubelet[2764]: W0117 12:22:46.951641 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.952266 kubelet[2764]: E0117 12:22:46.951713 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.952542 kubelet[2764]: E0117 12:22:46.952524 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.952658 kubelet[2764]: W0117 12:22:46.952641 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.952773 kubelet[2764]: E0117 12:22:46.952762 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.952920 kubelet[2764]: E0117 12:22:46.952885 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:46.954594 kubelet[2764]: E0117 12:22:46.954497 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:46.954594 kubelet[2764]: W0117 12:22:46.954512 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:46.954594 kubelet[2764]: E0117 12:22:46.954550 2764 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:47.428637 containerd[1603]: time="2025-01-17T12:22:47.427863059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:47.430217 containerd[1603]: time="2025-01-17T12:22:47.430143169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:22:47.430962 containerd[1603]: time="2025-01-17T12:22:47.430812607Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:47.438343 containerd[1603]: time="2025-01-17T12:22:47.437365164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:47.439401 containerd[1603]: time="2025-01-17T12:22:47.439334040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.751114899s" Jan 17 12:22:47.439605 containerd[1603]: time="2025-01-17T12:22:47.439575470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:22:47.442230 containerd[1603]: time="2025-01-17T12:22:47.442186681Z" level=info msg="CreateContainer within sandbox \"8eac3d1d00ae153258c469b5c9bf52dfd7bf308dca62f68a82baffc6d2f30e3c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:22:47.525436 containerd[1603]: time="2025-01-17T12:22:47.525364389Z" level=info msg="CreateContainer within sandbox \"8eac3d1d00ae153258c469b5c9bf52dfd7bf308dca62f68a82baffc6d2f30e3c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3d05349b1e21bbc7f0ac266db5eef750f3454e445ea54cb6af855e9373788b0f\"" Jan 17 12:22:47.531100 containerd[1603]: time="2025-01-17T12:22:47.529270685Z" level=info msg="StartContainer for \"3d05349b1e21bbc7f0ac266db5eef750f3454e445ea54cb6af855e9373788b0f\"" Jan 17 12:22:47.661679 containerd[1603]: time="2025-01-17T12:22:47.661474670Z" level=info msg="StartContainer for \"3d05349b1e21bbc7f0ac266db5eef750f3454e445ea54cb6af855e9373788b0f\" returns successfully" Jan 17 12:22:47.723774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d05349b1e21bbc7f0ac266db5eef750f3454e445ea54cb6af855e9373788b0f-rootfs.mount: Deactivated successfully. 
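The recurring "Nameserver limits exceeded" entries are the kubelet trimming pod DNS configuration: it keeps at most three nameservers (the classic glibc resolver limit), so when the node's resolv.conf carries more than that, the extras are dropped and the applied line is logged, here with 67.207.67.3 appearing twice because duplicates are not collapsed. The following small sketch shows that truncation; the parsing, the file contents, and the fourth sample nameserver are assumptions for illustration, not kubelet's actual implementation.

#!/usr/bin/env python3
# Illustrative sketch (not kubelet's code) of the truncation behind the
# "Nameserver limits exceeded" warnings in this log: pod DNS config keeps at
# most three nameservers, extra entries from the node's resolv.conf are
# omitted, and the applied line is what gets logged.
MAX_NAMESERVERS = 3  # classic glibc MAXNS; kubelet enforces the same cap

def applied_nameservers(resolv_conf_text: str) -> tuple[list[str], list[str]]:
    """Return (applied, omitted) nameserver entries, preserving order and duplicates."""
    servers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

if __name__ == "__main__":
    # Hypothetical resolv.conf; the first three entries mirror the applied
    # line in the log, the fourth is an invented extra that triggers omission.
    sample = (
        "nameserver 67.207.67.3\n"
        "nameserver 67.207.67.2\n"
        "nameserver 67.207.67.3\n"
        "nameserver 203.0.113.53\n"
    )
    applied, omitted = applied_nameservers(sample)
    print("applied nameserver line:", " ".join(applied))
    print("omitted:", " ".join(omitted))

The warning is therefore informational rather than fatal; it simply repeats every time the kubelet rebuilds a pod's DNS settings from the same oversized resolv.conf.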
Jan 17 12:22:47.757056 kubelet[2764]: E0117 12:22:47.756286 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:47.759171 kubelet[2764]: E0117 12:22:47.759073 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:47.760818 containerd[1603]: time="2025-01-17T12:22:47.726195269Z" level=info msg="shim disconnected" id=3d05349b1e21bbc7f0ac266db5eef750f3454e445ea54cb6af855e9373788b0f namespace=k8s.io Jan 17 12:22:47.761058 containerd[1603]: time="2025-01-17T12:22:47.761030329Z" level=warning msg="cleaning up after shim disconnected" id=3d05349b1e21bbc7f0ac266db5eef750f3454e445ea54cb6af855e9373788b0f namespace=k8s.io Jan 17 12:22:47.761129 containerd[1603]: time="2025-01-17T12:22:47.761114651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:47.793236 kubelet[2764]: I0117 12:22:47.793178 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7dff5c4bd8-9vd4j" podStartSLOduration=3.407858997 podStartE2EDuration="5.793106222s" podCreationTimestamp="2025-01-17 12:22:42 +0000 UTC" firstStartedPulling="2025-01-17 12:22:43.30260292 +0000 UTC m=+23.002008292" lastFinishedPulling="2025-01-17 12:22:45.687850133 +0000 UTC m=+25.387255517" observedRunningTime="2025-01-17 12:22:46.770165293 +0000 UTC m=+26.469570694" watchObservedRunningTime="2025-01-17 12:22:47.793106222 +0000 UTC m=+27.492511626" Jan 17 12:22:48.555897 kubelet[2764]: E0117 12:22:48.555448 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvjx9" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" Jan 17 12:22:48.761084 kubelet[2764]: E0117 12:22:48.761055 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:48.763274 kubelet[2764]: E0117 12:22:48.761191 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:48.765376 containerd[1603]: time="2025-01-17T12:22:48.764743436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:22:49.766590 kubelet[2764]: E0117 12:22:49.766547 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:50.560131 kubelet[2764]: E0117 12:22:50.560078 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvjx9" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" Jan 17 12:22:52.556036 kubelet[2764]: E0117 12:22:52.555867 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvjx9" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" Jan 17 12:22:53.567871 containerd[1603]: time="2025-01-17T12:22:53.567797910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:53.569970 containerd[1603]: time="2025-01-17T12:22:53.569626770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:22:53.572204 containerd[1603]: time="2025-01-17T12:22:53.571905959Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:53.627774 containerd[1603]: time="2025-01-17T12:22:53.627615889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:53.633446 containerd[1603]: time="2025-01-17T12:22:53.629199047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.864398489s" Jan 17 12:22:53.633446 containerd[1603]: time="2025-01-17T12:22:53.629238212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:22:53.637325 containerd[1603]: time="2025-01-17T12:22:53.637276402Z" level=info msg="CreateContainer within sandbox \"8eac3d1d00ae153258c469b5c9bf52dfd7bf308dca62f68a82baffc6d2f30e3c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:22:53.679419 containerd[1603]: time="2025-01-17T12:22:53.679324481Z" level=info msg="CreateContainer within sandbox \"8eac3d1d00ae153258c469b5c9bf52dfd7bf308dca62f68a82baffc6d2f30e3c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d165381e711391198ca9a65b07ccdf243464e08739842c811658da5c91b7d699\"" Jan 17 12:22:53.680422 containerd[1603]: time="2025-01-17T12:22:53.680346708Z" level=info msg="StartContainer for \"d165381e711391198ca9a65b07ccdf243464e08739842c811658da5c91b7d699\"" Jan 17 12:22:53.805503 containerd[1603]: time="2025-01-17T12:22:53.805436180Z" level=info msg="StartContainer for \"d165381e711391198ca9a65b07ccdf243464e08739842c811658da5c91b7d699\" returns successfully" Jan 17 12:22:54.559084 kubelet[2764]: E0117 12:22:54.556040 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvjx9" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" Jan 17 12:22:54.670226 containerd[1603]: time="2025-01-17T12:22:54.670093568Z" level=info msg="shim disconnected" id=d165381e711391198ca9a65b07ccdf243464e08739842c811658da5c91b7d699 namespace=k8s.io Jan 17 12:22:54.670226 containerd[1603]: time="2025-01-17T12:22:54.670175831Z" level=warning msg="cleaning up after shim disconnected" id=d165381e711391198ca9a65b07ccdf243464e08739842c811658da5c91b7d699 
namespace=k8s.io Jan 17 12:22:54.670226 containerd[1603]: time="2025-01-17T12:22:54.670191011Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:54.673140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d165381e711391198ca9a65b07ccdf243464e08739842c811658da5c91b7d699-rootfs.mount: Deactivated successfully. Jan 17 12:22:54.716132 kubelet[2764]: I0117 12:22:54.715420 2764 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:22:54.756634 kubelet[2764]: I0117 12:22:54.756571 2764 topology_manager.go:215] "Topology Admit Handler" podUID="21096fca-e879-4e15-89db-72e1ab742bae" podNamespace="kube-system" podName="coredns-76f75df574-tftgd" Jan 17 12:22:54.773247 kubelet[2764]: I0117 12:22:54.768098 2764 topology_manager.go:215] "Topology Admit Handler" podUID="c6d9fd0c-351d-4397-ad95-002c18dff9fb" podNamespace="kube-system" podName="coredns-76f75df574-t9p6v" Jan 17 12:22:54.774387 kubelet[2764]: I0117 12:22:54.773688 2764 topology_manager.go:215] "Topology Admit Handler" podUID="fafa34d6-67fc-4cb4-83a2-49e3ad56846d" podNamespace="calico-system" podName="calico-kube-controllers-6946578766-9thqb" Jan 17 12:22:54.775003 kubelet[2764]: I0117 12:22:54.774796 2764 topology_manager.go:215] "Topology Admit Handler" podUID="21685820-0784-4b7f-bf71-b7f2faefd98c" podNamespace="calico-apiserver" podName="calico-apiserver-7d6ff6796c-mj8x2" Jan 17 12:22:54.775211 kubelet[2764]: I0117 12:22:54.775123 2764 topology_manager.go:215] "Topology Admit Handler" podUID="c96ff941-06c3-4d81-9057-dc8dac75c1c4" podNamespace="calico-apiserver" podName="calico-apiserver-7d6ff6796c-vmbmt" Jan 17 12:22:54.825327 kubelet[2764]: E0117 12:22:54.822969 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:54.829934 containerd[1603]: time="2025-01-17T12:22:54.829820242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:22:54.902802 kubelet[2764]: I0117 12:22:54.902749 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c5ts\" (UniqueName: \"kubernetes.io/projected/c96ff941-06c3-4d81-9057-dc8dac75c1c4-kube-api-access-4c5ts\") pod \"calico-apiserver-7d6ff6796c-vmbmt\" (UID: \"c96ff941-06c3-4d81-9057-dc8dac75c1c4\") " pod="calico-apiserver/calico-apiserver-7d6ff6796c-vmbmt" Jan 17 12:22:54.903161 kubelet[2764]: I0117 12:22:54.903113 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6d9fd0c-351d-4397-ad95-002c18dff9fb-config-volume\") pod \"coredns-76f75df574-t9p6v\" (UID: \"c6d9fd0c-351d-4397-ad95-002c18dff9fb\") " pod="kube-system/coredns-76f75df574-t9p6v" Jan 17 12:22:54.903426 kubelet[2764]: I0117 12:22:54.903229 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fafa34d6-67fc-4cb4-83a2-49e3ad56846d-tigera-ca-bundle\") pod \"calico-kube-controllers-6946578766-9thqb\" (UID: \"fafa34d6-67fc-4cb4-83a2-49e3ad56846d\") " pod="calico-system/calico-kube-controllers-6946578766-9thqb" Jan 17 12:22:54.903426 kubelet[2764]: I0117 12:22:54.903281 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s25k\" (UniqueName: 
\"kubernetes.io/projected/c6d9fd0c-351d-4397-ad95-002c18dff9fb-kube-api-access-7s25k\") pod \"coredns-76f75df574-t9p6v\" (UID: \"c6d9fd0c-351d-4397-ad95-002c18dff9fb\") " pod="kube-system/coredns-76f75df574-t9p6v" Jan 17 12:22:54.903426 kubelet[2764]: I0117 12:22:54.903322 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrnrq\" (UniqueName: \"kubernetes.io/projected/fafa34d6-67fc-4cb4-83a2-49e3ad56846d-kube-api-access-jrnrq\") pod \"calico-kube-controllers-6946578766-9thqb\" (UID: \"fafa34d6-67fc-4cb4-83a2-49e3ad56846d\") " pod="calico-system/calico-kube-controllers-6946578766-9thqb" Jan 17 12:22:54.903426 kubelet[2764]: I0117 12:22:54.903347 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/21685820-0784-4b7f-bf71-b7f2faefd98c-calico-apiserver-certs\") pod \"calico-apiserver-7d6ff6796c-mj8x2\" (UID: \"21685820-0784-4b7f-bf71-b7f2faefd98c\") " pod="calico-apiserver/calico-apiserver-7d6ff6796c-mj8x2" Jan 17 12:22:54.903426 kubelet[2764]: I0117 12:22:54.903368 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z787s\" (UniqueName: \"kubernetes.io/projected/21685820-0784-4b7f-bf71-b7f2faefd98c-kube-api-access-z787s\") pod \"calico-apiserver-7d6ff6796c-mj8x2\" (UID: \"21685820-0784-4b7f-bf71-b7f2faefd98c\") " pod="calico-apiserver/calico-apiserver-7d6ff6796c-mj8x2" Jan 17 12:22:54.903564 kubelet[2764]: I0117 12:22:54.903398 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21096fca-e879-4e15-89db-72e1ab742bae-config-volume\") pod \"coredns-76f75df574-tftgd\" (UID: \"21096fca-e879-4e15-89db-72e1ab742bae\") " pod="kube-system/coredns-76f75df574-tftgd" Jan 17 12:22:54.903564 kubelet[2764]: I0117 12:22:54.903419 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrn7\" (UniqueName: \"kubernetes.io/projected/21096fca-e879-4e15-89db-72e1ab742bae-kube-api-access-9jrn7\") pod \"coredns-76f75df574-tftgd\" (UID: \"21096fca-e879-4e15-89db-72e1ab742bae\") " pod="kube-system/coredns-76f75df574-tftgd" Jan 17 12:22:54.903564 kubelet[2764]: I0117 12:22:54.903467 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c96ff941-06c3-4d81-9057-dc8dac75c1c4-calico-apiserver-certs\") pod \"calico-apiserver-7d6ff6796c-vmbmt\" (UID: \"c96ff941-06c3-4d81-9057-dc8dac75c1c4\") " pod="calico-apiserver/calico-apiserver-7d6ff6796c-vmbmt" Jan 17 12:22:55.100254 kubelet[2764]: E0117 12:22:55.098340 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:55.101830 containerd[1603]: time="2025-01-17T12:22:55.101284892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tftgd,Uid:21096fca-e879-4e15-89db-72e1ab742bae,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:55.113612 kubelet[2764]: E0117 12:22:55.112989 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:22:55.114527 containerd[1603]: 
time="2025-01-17T12:22:55.114451496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff6796c-vmbmt,Uid:c96ff941-06c3-4d81-9057-dc8dac75c1c4,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:22:55.114822 containerd[1603]: time="2025-01-17T12:22:55.114778858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t9p6v,Uid:c6d9fd0c-351d-4397-ad95-002c18dff9fb,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:55.118093 containerd[1603]: time="2025-01-17T12:22:55.115656148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff6796c-mj8x2,Uid:21685820-0784-4b7f-bf71-b7f2faefd98c,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:22:55.120221 containerd[1603]: time="2025-01-17T12:22:55.118942252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6946578766-9thqb,Uid:fafa34d6-67fc-4cb4-83a2-49e3ad56846d,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:55.619169 containerd[1603]: time="2025-01-17T12:22:55.619064439Z" level=error msg="Failed to destroy network for sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.629062 containerd[1603]: time="2025-01-17T12:22:55.627267338Z" level=error msg="encountered an error cleaning up failed sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.633756 containerd[1603]: time="2025-01-17T12:22:55.633511318Z" level=error msg="Failed to destroy network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.636283 containerd[1603]: time="2025-01-17T12:22:55.636210418Z" level=error msg="encountered an error cleaning up failed sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.640556 containerd[1603]: time="2025-01-17T12:22:55.639830769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff6796c-mj8x2,Uid:21685820-0784-4b7f-bf71-b7f2faefd98c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.653637 containerd[1603]: time="2025-01-17T12:22:55.652402549Z" level=error msg="Failed to destroy network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 17 12:22:55.653637 containerd[1603]: time="2025-01-17T12:22:55.653000206Z" level=error msg="encountered an error cleaning up failed sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.653637 containerd[1603]: time="2025-01-17T12:22:55.653074114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff6796c-vmbmt,Uid:c96ff941-06c3-4d81-9057-dc8dac75c1c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.653637 containerd[1603]: time="2025-01-17T12:22:55.653215698Z" level=error msg="Failed to destroy network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.653637 containerd[1603]: time="2025-01-17T12:22:55.653520198Z" level=error msg="encountered an error cleaning up failed sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.653637 containerd[1603]: time="2025-01-17T12:22:55.653573699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6946578766-9thqb,Uid:fafa34d6-67fc-4cb4-83a2-49e3ad56846d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.654077 containerd[1603]: time="2025-01-17T12:22:55.653704908Z" level=error msg="Failed to destroy network for sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.654077 containerd[1603]: time="2025-01-17T12:22:55.654052074Z" level=error msg="encountered an error cleaning up failed sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.654144 containerd[1603]: time="2025-01-17T12:22:55.654098501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t9p6v,Uid:c6d9fd0c-351d-4397-ad95-002c18dff9fb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.654456 kubelet[2764]: E0117 12:22:55.654422 2764 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.655057 kubelet[2764]: E0117 12:22:55.655032 2764 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6ff6796c-vmbmt" Jan 17 12:22:55.655179 kubelet[2764]: E0117 12:22:55.655169 2764 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6ff6796c-vmbmt" Jan 17 12:22:55.655325 kubelet[2764]: E0117 12:22:55.655312 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6ff6796c-vmbmt_calico-apiserver(c96ff941-06c3-4d81-9057-dc8dac75c1c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6ff6796c-vmbmt_calico-apiserver(c96ff941-06c3-4d81-9057-dc8dac75c1c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6ff6796c-vmbmt" podUID="c96ff941-06c3-4d81-9057-dc8dac75c1c4" Jan 17 12:22:55.655770 kubelet[2764]: E0117 12:22:55.654454 2764 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.655900 kubelet[2764]: E0117 12:22:55.655888 2764 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-t9p6v" Jan 17 12:22:55.655992 kubelet[2764]: E0117 12:22:55.655983 
2764 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-t9p6v" Jan 17 12:22:55.656125 kubelet[2764]: E0117 12:22:55.656115 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-t9p6v_kube-system(c6d9fd0c-351d-4397-ad95-002c18dff9fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-t9p6v_kube-system(c6d9fd0c-351d-4397-ad95-002c18dff9fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-t9p6v" podUID="c6d9fd0c-351d-4397-ad95-002c18dff9fb" Jan 17 12:22:55.656313 kubelet[2764]: E0117 12:22:55.654488 2764 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.656417 kubelet[2764]: E0117 12:22:55.656407 2764 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6ff6796c-mj8x2" Jan 17 12:22:55.656525 kubelet[2764]: E0117 12:22:55.656512 2764 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6ff6796c-mj8x2" Jan 17 12:22:55.656638 kubelet[2764]: E0117 12:22:55.656625 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6ff6796c-mj8x2_calico-apiserver(21685820-0784-4b7f-bf71-b7f2faefd98c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6ff6796c-mj8x2_calico-apiserver(21685820-0784-4b7f-bf71-b7f2faefd98c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6ff6796c-mj8x2" podUID="21685820-0784-4b7f-bf71-b7f2faefd98c" Jan 17 12:22:55.656738 
kubelet[2764]: E0117 12:22:55.654523 2764 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.656834 kubelet[2764]: E0117 12:22:55.656822 2764 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6946578766-9thqb" Jan 17 12:22:55.656903 kubelet[2764]: E0117 12:22:55.656896 2764 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6946578766-9thqb" Jan 17 12:22:55.656993 kubelet[2764]: E0117 12:22:55.656984 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6946578766-9thqb_calico-system(fafa34d6-67fc-4cb4-83a2-49e3ad56846d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6946578766-9thqb_calico-system(fafa34d6-67fc-4cb4-83a2-49e3ad56846d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6946578766-9thqb" podUID="fafa34d6-67fc-4cb4-83a2-49e3ad56846d" Jan 17 12:22:55.672526 containerd[1603]: time="2025-01-17T12:22:55.672415672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tftgd,Uid:21096fca-e879-4e15-89db-72e1ab742bae,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.673280 kubelet[2764]: E0117 12:22:55.673242 2764 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.673401 kubelet[2764]: E0117 12:22:55.673303 2764 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tftgd" Jan 17 12:22:55.673401 kubelet[2764]: E0117 12:22:55.673326 2764 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tftgd" Jan 17 12:22:55.673401 kubelet[2764]: E0117 12:22:55.673381 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tftgd_kube-system(21096fca-e879-4e15-89db-72e1ab742bae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tftgd_kube-system(21096fca-e879-4e15-89db-72e1ab742bae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tftgd" podUID="21096fca-e879-4e15-89db-72e1ab742bae" Jan 17 12:22:55.824906 kubelet[2764]: I0117 12:22:55.824232 2764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:22:55.826728 kubelet[2764]: I0117 12:22:55.826615 2764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:22:55.834507 containerd[1603]: time="2025-01-17T12:22:55.832973800Z" level=info msg="StopPodSandbox for \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\"" Jan 17 12:22:55.835976 containerd[1603]: time="2025-01-17T12:22:55.834907686Z" level=info msg="StopPodSandbox for \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\"" Jan 17 12:22:55.835976 containerd[1603]: time="2025-01-17T12:22:55.835291238Z" level=info msg="Ensure that sandbox b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c in task-service has been cleanup successfully" Jan 17 12:22:55.837463 containerd[1603]: time="2025-01-17T12:22:55.837199503Z" level=info msg="Ensure that sandbox 9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66 in task-service has been cleanup successfully" Jan 17 12:22:55.839370 kubelet[2764]: I0117 12:22:55.838309 2764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:22:55.840257 kubelet[2764]: I0117 12:22:55.839895 2764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:22:55.841246 containerd[1603]: time="2025-01-17T12:22:55.840492405Z" level=info msg="StopPodSandbox for \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\"" Jan 17 12:22:55.841246 containerd[1603]: time="2025-01-17T12:22:55.841187383Z" level=info msg="StopPodSandbox 
for \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\"" Jan 17 12:22:55.841613 containerd[1603]: time="2025-01-17T12:22:55.841377368Z" level=info msg="Ensure that sandbox 490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c in task-service has been cleanup successfully" Jan 17 12:22:55.841613 containerd[1603]: time="2025-01-17T12:22:55.841505883Z" level=info msg="Ensure that sandbox 9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac in task-service has been cleanup successfully" Jan 17 12:22:55.853923 kubelet[2764]: I0117 12:22:55.853627 2764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:22:55.856661 containerd[1603]: time="2025-01-17T12:22:55.856525772Z" level=info msg="StopPodSandbox for \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\"" Jan 17 12:22:55.856802 containerd[1603]: time="2025-01-17T12:22:55.856713958Z" level=info msg="Ensure that sandbox bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d in task-service has been cleanup successfully" Jan 17 12:22:55.956669 containerd[1603]: time="2025-01-17T12:22:55.956375189Z" level=error msg="StopPodSandbox for \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\" failed" error="failed to destroy network for sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.958210 kubelet[2764]: E0117 12:22:55.958052 2764 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:22:55.958210 kubelet[2764]: E0117 12:22:55.958138 2764 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c"} Jan 17 12:22:55.958210 kubelet[2764]: E0117 12:22:55.958174 2764 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21096fca-e879-4e15-89db-72e1ab742bae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:55.958210 kubelet[2764]: E0117 12:22:55.958206 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21096fca-e879-4e15-89db-72e1ab742bae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-tftgd" podUID="21096fca-e879-4e15-89db-72e1ab742bae" Jan 17 12:22:55.958835 containerd[1603]: time="2025-01-17T12:22:55.958787467Z" level=error msg="StopPodSandbox for \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\" failed" error="failed to destroy network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.959672 kubelet[2764]: E0117 12:22:55.959488 2764 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:22:55.959672 kubelet[2764]: E0117 12:22:55.959541 2764 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac"} Jan 17 12:22:55.959672 kubelet[2764]: E0117 12:22:55.959600 2764 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21685820-0784-4b7f-bf71-b7f2faefd98c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:55.959672 kubelet[2764]: E0117 12:22:55.959642 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21685820-0784-4b7f-bf71-b7f2faefd98c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6ff6796c-mj8x2" podUID="21685820-0784-4b7f-bf71-b7f2faefd98c" Jan 17 12:22:55.982801 containerd[1603]: time="2025-01-17T12:22:55.982708206Z" level=error msg="StopPodSandbox for \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\" failed" error="failed to destroy network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.983375 kubelet[2764]: E0117 12:22:55.983334 2764 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:22:55.983487 kubelet[2764]: E0117 12:22:55.983399 2764 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66"} Jan 17 12:22:55.983487 kubelet[2764]: E0117 12:22:55.983475 2764 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c96ff941-06c3-4d81-9057-dc8dac75c1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:55.983597 kubelet[2764]: E0117 12:22:55.983532 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c96ff941-06c3-4d81-9057-dc8dac75c1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6ff6796c-vmbmt" podUID="c96ff941-06c3-4d81-9057-dc8dac75c1c4" Jan 17 12:22:55.984979 containerd[1603]: time="2025-01-17T12:22:55.984780873Z" level=error msg="StopPodSandbox for \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\" failed" error="failed to destroy network for sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.986253 kubelet[2764]: E0117 12:22:55.986192 2764 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:22:55.986369 kubelet[2764]: E0117 12:22:55.986261 2764 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d"} Jan 17 12:22:55.986369 kubelet[2764]: E0117 12:22:55.986326 2764 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6d9fd0c-351d-4397-ad95-002c18dff9fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:55.986505 kubelet[2764]: E0117 12:22:55.986382 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6d9fd0c-351d-4397-ad95-002c18dff9fb\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-t9p6v" podUID="c6d9fd0c-351d-4397-ad95-002c18dff9fb" Jan 17 12:22:55.986643 containerd[1603]: time="2025-01-17T12:22:55.986540142Z" level=error msg="StopPodSandbox for \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\" failed" error="failed to destroy network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:55.986829 kubelet[2764]: E0117 12:22:55.986811 2764 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:22:55.986920 kubelet[2764]: E0117 12:22:55.986844 2764 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c"} Jan 17 12:22:55.986920 kubelet[2764]: E0117 12:22:55.986911 2764 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fafa34d6-67fc-4cb4-83a2-49e3ad56846d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:55.986999 kubelet[2764]: E0117 12:22:55.986956 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fafa34d6-67fc-4cb4-83a2-49e3ad56846d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6946578766-9thqb" podUID="fafa34d6-67fc-4cb4-83a2-49e3ad56846d" Jan 17 12:22:56.563905 containerd[1603]: time="2025-01-17T12:22:56.563295007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvjx9,Uid:9e48819f-106c-43b3-89f6-2976b3a7c1c2,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:56.720427 containerd[1603]: time="2025-01-17T12:22:56.720352896Z" level=error msg="Failed to destroy network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 
12:22:56.722300 containerd[1603]: time="2025-01-17T12:22:56.721541254Z" level=error msg="encountered an error cleaning up failed sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:56.722589 containerd[1603]: time="2025-01-17T12:22:56.722540527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvjx9,Uid:9e48819f-106c-43b3-89f6-2976b3a7c1c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:56.727320 kubelet[2764]: E0117 12:22:56.727268 2764 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:56.728498 kubelet[2764]: E0117 12:22:56.728295 2764 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvjx9" Jan 17 12:22:56.728498 kubelet[2764]: E0117 12:22:56.728371 2764 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvjx9" Jan 17 12:22:56.729788 kubelet[2764]: E0117 12:22:56.728827 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mvjx9_calico-system(9e48819f-106c-43b3-89f6-2976b3a7c1c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mvjx9_calico-system(9e48819f-106c-43b3-89f6-2976b3a7c1c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvjx9" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" Jan 17 12:22:56.729580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06-shm.mount: Deactivated successfully. 
Jan 17 12:22:56.857804 kubelet[2764]: I0117 12:22:56.857591 2764 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:22:56.861638 containerd[1603]: time="2025-01-17T12:22:56.860462409Z" level=info msg="StopPodSandbox for \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\"" Jan 17 12:22:56.862977 containerd[1603]: time="2025-01-17T12:22:56.862352988Z" level=info msg="Ensure that sandbox 78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06 in task-service has been cleanup successfully" Jan 17 12:22:56.933072 containerd[1603]: time="2025-01-17T12:22:56.932958285Z" level=error msg="StopPodSandbox for \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\" failed" error="failed to destroy network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:56.933765 kubelet[2764]: E0117 12:22:56.933569 2764 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:22:56.933765 kubelet[2764]: E0117 12:22:56.933642 2764 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06"} Jan 17 12:22:56.933765 kubelet[2764]: E0117 12:22:56.933682 2764 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e48819f-106c-43b3-89f6-2976b3a7c1c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:56.933765 kubelet[2764]: E0117 12:22:56.933712 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e48819f-106c-43b3-89f6-2976b3a7c1c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvjx9" podUID="9e48819f-106c-43b3-89f6-2976b3a7c1c2" Jan 17 12:23:04.375358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104337719.mount: Deactivated successfully. 
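Every add and delete failure in the stretch above ends at the same condition: the CNI plugin cannot stat /var/lib/calico/nodename, the file that calico/node writes only once its container is running with /var/lib/calico/ mounted. The sketch below is illustration only, not the plugin's source; the only thing taken from the log is the path quoted in the error text.

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Path taken verbatim from the error text above; calico/node writes its
    	// node name here once it is running with /var/lib/calico/ mounted.
    	const nodenameFile = "/var/lib/calico/nodename"

    	if _, err := os.Stat(nodenameFile); err != nil {
    		// The same condition the plugin reports: until the file exists,
    		// every CNI add/delete on this node fails.
    		fmt.Printf("%s not ready: %v\n", nodenameFile, err)
    		return
    	}
    	name, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	fmt.Printf("calico nodename: %s\n", name)
    }

In the entries that follow, the calico/node image finishes pulling and its container starts, after which the same sandboxes tear down and recreate cleanly.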
Jan 17 12:23:04.635282 containerd[1603]: time="2025-01-17T12:23:04.569553145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:23:04.695154 containerd[1603]: time="2025-01-17T12:23:04.694403102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:04.799086 containerd[1603]: time="2025-01-17T12:23:04.797197714Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:04.803156 containerd[1603]: time="2025-01-17T12:23:04.803064584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:04.812310 containerd[1603]: time="2025-01-17T12:23:04.812039716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.969673716s" Jan 17 12:23:04.812310 containerd[1603]: time="2025-01-17T12:23:04.812137281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:23:04.974391 containerd[1603]: time="2025-01-17T12:23:04.973637128Z" level=info msg="CreateContainer within sandbox \"8eac3d1d00ae153258c469b5c9bf52dfd7bf308dca62f68a82baffc6d2f30e3c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:23:05.105204 containerd[1603]: time="2025-01-17T12:23:05.105136970Z" level=info msg="CreateContainer within sandbox \"8eac3d1d00ae153258c469b5c9bf52dfd7bf308dca62f68a82baffc6d2f30e3c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b4dfb947d93215380113fd0ada6c38793c5c1b6d75fe8242357646610c8d21a3\"" Jan 17 12:23:05.127820 containerd[1603]: time="2025-01-17T12:23:05.126942133Z" level=info msg="StartContainer for \"b4dfb947d93215380113fd0ada6c38793c5c1b6d75fe8242357646610c8d21a3\"" Jan 17 12:23:05.554043 containerd[1603]: time="2025-01-17T12:23:05.553949879Z" level=info msg="StartContainer for \"b4dfb947d93215380113fd0ada6c38793c5c1b6d75fe8242357646610c8d21a3\" returns successfully" Jan 17 12:23:05.694983 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:23:05.696705 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:23:05.799093 systemd-journald[1141]: Under memory pressure, flushing caches. Jan 17 12:23:05.798312 systemd-resolved[1488]: Under memory pressure, flushing caches. Jan 17 12:23:05.798423 systemd-resolved[1488]: Flushed all caches. 
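As a back-of-the-envelope check on the pull reported a few lines up (142,741,872 bytes in 9.969673716 s), the effective rate is roughly 14 MB/s. The snippet below only reproduces that arithmetic; both constants are copied from the log and nothing else is assumed.

    package main

    import "fmt"

    func main() {
    	const bytesPulled = 142741872.0 // "size \"142741872\"" for ghcr.io/flatcar/calico/node:v3.29.1
    	const seconds = 9.969673716     // "in 9.969673716s"
    	rate := bytesPulled / seconds
    	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
    }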
Jan 17 12:23:05.989901 kubelet[2764]: E0117 12:23:05.988490 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:06.096367 kubelet[2764]: I0117 12:23:06.095908 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-4nbjz" podStartSLOduration=2.764967703 podStartE2EDuration="24.094258483s" podCreationTimestamp="2025-01-17 12:22:42 +0000 UTC" firstStartedPulling="2025-01-17 12:22:43.488069252 +0000 UTC m=+23.187474624" lastFinishedPulling="2025-01-17 12:23:04.817360014 +0000 UTC m=+44.516765404" observedRunningTime="2025-01-17 12:23:06.079094704 +0000 UTC m=+45.778500106" watchObservedRunningTime="2025-01-17 12:23:06.094258483 +0000 UTC m=+45.793663883" Jan 17 12:23:06.556968 containerd[1603]: time="2025-01-17T12:23:06.556913749Z" level=info msg="StopPodSandbox for \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\"" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.661 [INFO][3898] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.661 [INFO][3898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" iface="eth0" netns="/var/run/netns/cni-41e77d65-8955-4ebb-f159-2874fa641808" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.662 [INFO][3898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" iface="eth0" netns="/var/run/netns/cni-41e77d65-8955-4ebb-f159-2874fa641808" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.663 [INFO][3898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" iface="eth0" netns="/var/run/netns/cni-41e77d65-8955-4ebb-f159-2874fa641808" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.663 [INFO][3898] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.663 [INFO][3898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.856 [INFO][3904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" HandleID="k8s-pod-network.bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.857 [INFO][3904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.857 [INFO][3904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.873 [WARNING][3904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" HandleID="k8s-pod-network.bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.873 [INFO][3904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" HandleID="k8s-pod-network.bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.875 [INFO][3904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:06.882483 containerd[1603]: 2025-01-17 12:23:06.879 [INFO][3898] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:06.887526 containerd[1603]: time="2025-01-17T12:23:06.886157113Z" level=info msg="TearDown network for sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\" successfully" Jan 17 12:23:06.887526 containerd[1603]: time="2025-01-17T12:23:06.886217831Z" level=info msg="StopPodSandbox for \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\" returns successfully" Jan 17 12:23:06.890514 systemd[1]: run-netns-cni\x2d41e77d65\x2d8955\x2d4ebb\x2df159\x2d2874fa641808.mount: Deactivated successfully. Jan 17 12:23:06.893643 kubelet[2764]: E0117 12:23:06.893299 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:06.894647 containerd[1603]: time="2025-01-17T12:23:06.894333112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t9p6v,Uid:c6d9fd0c-351d-4397-ad95-002c18dff9fb,Namespace:kube-system,Attempt:1,}" Jan 17 12:23:06.924778 kubelet[2764]: I0117 12:23:06.923749 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:06.926699 kubelet[2764]: E0117 12:23:06.926670 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:07.184250 systemd-networkd[1223]: cali0bb7602db75: Link UP Jan 17 12:23:07.185665 systemd-networkd[1223]: cali0bb7602db75: Gained carrier Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:06.994 [INFO][3911] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.018 [INFO][3911] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0 coredns-76f75df574- kube-system c6d9fd0c-351d-4397-ad95-002c18dff9fb 826 0 2025-01-17 12:22:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-1-b9b10bea58 coredns-76f75df574-t9p6v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0bb7602db75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Namespace="kube-system" 
Pod="coredns-76f75df574-t9p6v" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.018 [INFO][3911] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Namespace="kube-system" Pod="coredns-76f75df574-t9p6v" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.076 [INFO][3922] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" HandleID="k8s-pod-network.59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.091 [INFO][3922] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" HandleID="k8s-pod-network.59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318120), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-1-b9b10bea58", "pod":"coredns-76f75df574-t9p6v", "timestamp":"2025-01-17 12:23:07.076590673 +0000 UTC"}, Hostname:"ci-4081.3.0-1-b9b10bea58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.092 [INFO][3922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.092 [INFO][3922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.092 [INFO][3922] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-1-b9b10bea58' Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.101 [INFO][3922] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.112 [INFO][3922] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.120 [INFO][3922] ipam/ipam.go 489: Trying affinity for 192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.123 [INFO][3922] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.126 [INFO][3922] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.126 [INFO][3922] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.129 [INFO][3922] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6 Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.136 [INFO][3922] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.147 [INFO][3922] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.129/26] block=192.168.56.128/26 handle="k8s-pod-network.59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.147 [INFO][3922] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.129/26] handle="k8s-pod-network.59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.147 [INFO][3922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:23:07.198675 containerd[1603]: 2025-01-17 12:23:07.147 [INFO][3922] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.129/26] IPv6=[] ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" HandleID="k8s-pod-network.59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:07.199745 containerd[1603]: 2025-01-17 12:23:07.155 [INFO][3911] cni-plugin/k8s.go 386: Populated endpoint ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Namespace="kube-system" Pod="coredns-76f75df574-t9p6v" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c6d9fd0c-351d-4397-ad95-002c18dff9fb", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"", Pod:"coredns-76f75df574-t9p6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bb7602db75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:07.199745 containerd[1603]: 2025-01-17 12:23:07.155 [INFO][3911] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.129/32] ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Namespace="kube-system" Pod="coredns-76f75df574-t9p6v" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:07.199745 containerd[1603]: 2025-01-17 12:23:07.156 [INFO][3911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bb7602db75 ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Namespace="kube-system" Pod="coredns-76f75df574-t9p6v" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:07.199745 containerd[1603]: 2025-01-17 12:23:07.174 [INFO][3911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Namespace="kube-system" Pod="coredns-76f75df574-t9p6v" 
WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:07.199745 containerd[1603]: 2025-01-17 12:23:07.174 [INFO][3911] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Namespace="kube-system" Pod="coredns-76f75df574-t9p6v" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c6d9fd0c-351d-4397-ad95-002c18dff9fb", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6", Pod:"coredns-76f75df574-t9p6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bb7602db75", MAC:"be:70:9c:c8:f1:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:07.199745 containerd[1603]: 2025-01-17 12:23:07.193 [INFO][3911] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6" Namespace="kube-system" Pod="coredns-76f75df574-t9p6v" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:07.259116 containerd[1603]: time="2025-01-17T12:23:07.258336347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:07.259116 containerd[1603]: time="2025-01-17T12:23:07.258435994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:07.259116 containerd[1603]: time="2025-01-17T12:23:07.258459980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:07.260763 containerd[1603]: time="2025-01-17T12:23:07.259069918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:07.456140 containerd[1603]: time="2025-01-17T12:23:07.455399838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t9p6v,Uid:c6d9fd0c-351d-4397-ad95-002c18dff9fb,Namespace:kube-system,Attempt:1,} returns sandbox id \"59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6\"" Jan 17 12:23:07.462790 kubelet[2764]: E0117 12:23:07.461659 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:07.497087 containerd[1603]: time="2025-01-17T12:23:07.496205949Z" level=info msg="CreateContainer within sandbox \"59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:23:07.549545 containerd[1603]: time="2025-01-17T12:23:07.549462100Z" level=info msg="CreateContainer within sandbox \"59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25506ce4743d5876120db0cce017b667998d1700dbfd4c088b65822d24927487\"" Jan 17 12:23:07.553545 containerd[1603]: time="2025-01-17T12:23:07.552764350Z" level=info msg="StartContainer for \"25506ce4743d5876120db0cce017b667998d1700dbfd4c088b65822d24927487\"" Jan 17 12:23:07.555709 containerd[1603]: time="2025-01-17T12:23:07.555638332Z" level=info msg="StopPodSandbox for \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\"" Jan 17 12:23:07.852069 systemd-journald[1141]: Under memory pressure, flushing caches. Jan 17 12:23:07.845998 systemd-resolved[1488]: Under memory pressure, flushing caches. Jan 17 12:23:07.846009 systemd-resolved[1488]: Flushed all caches. Jan 17 12:23:07.878320 containerd[1603]: time="2025-01-17T12:23:07.878254802Z" level=info msg="StartContainer for \"25506ce4743d5876120db0cce017b667998d1700dbfd4c088b65822d24927487\" returns successfully" Jan 17 12:23:07.986319 kubelet[2764]: E0117 12:23:07.985281 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.875 [INFO][4086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.877 [INFO][4086] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" iface="eth0" netns="/var/run/netns/cni-ed45158c-cca9-0a86-5cdd-a3ee12db4aa3" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.877 [INFO][4086] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" iface="eth0" netns="/var/run/netns/cni-ed45158c-cca9-0a86-5cdd-a3ee12db4aa3" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.877 [INFO][4086] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" iface="eth0" netns="/var/run/netns/cni-ed45158c-cca9-0a86-5cdd-a3ee12db4aa3" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.877 [INFO][4086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.877 [INFO][4086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.946 [INFO][4120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" HandleID="k8s-pod-network.9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.947 [INFO][4120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.947 [INFO][4120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.971 [WARNING][4120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" HandleID="k8s-pod-network.9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.971 [INFO][4120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" HandleID="k8s-pod-network.9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.977 [INFO][4120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:07.993258 containerd[1603]: 2025-01-17 12:23:07.983 [INFO][4086] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:07.997393 containerd[1603]: time="2025-01-17T12:23:07.993691456Z" level=info msg="TearDown network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\" successfully" Jan 17 12:23:07.997393 containerd[1603]: time="2025-01-17T12:23:07.996216388Z" level=info msg="StopPodSandbox for \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\" returns successfully" Jan 17 12:23:07.999472 containerd[1603]: time="2025-01-17T12:23:07.999400534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff6796c-vmbmt,Uid:c96ff941-06c3-4d81-9057-dc8dac75c1c4,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:23:08.015536 systemd[1]: run-netns-cni\x2ded45158c\x2dcca9\x2d0a86\x2d5cdd\x2da3ee12db4aa3.mount: Deactivated successfully. 
Jan 17 12:23:08.250053 kernel: bpftool[4163]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:23:08.479563 systemd-networkd[1223]: cali3db696dd913: Link UP Jan 17 12:23:08.483816 systemd-networkd[1223]: cali3db696dd913: Gained carrier Jan 17 12:23:08.506444 kubelet[2764]: I0117 12:23:08.506317 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-t9p6v" podStartSLOduration=34.50624307 podStartE2EDuration="34.50624307s" podCreationTimestamp="2025-01-17 12:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:08.171051167 +0000 UTC m=+47.870456567" watchObservedRunningTime="2025-01-17 12:23:08.50624307 +0000 UTC m=+48.205648469" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.265 [INFO][4141] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0 calico-apiserver-7d6ff6796c- calico-apiserver c96ff941-06c3-4d81-9057-dc8dac75c1c4 841 0 2025-01-17 12:22:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6ff6796c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-1-b9b10bea58 calico-apiserver-7d6ff6796c-vmbmt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3db696dd913 [] []}} ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-vmbmt" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.265 [INFO][4141] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-vmbmt" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.374 [INFO][4167] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" HandleID="k8s-pod-network.b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.394 [INFO][4167] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" HandleID="k8s-pod-network.b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004bf630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-1-b9b10bea58", "pod":"calico-apiserver-7d6ff6796c-vmbmt", "timestamp":"2025-01-17 12:23:08.374138563 +0000 UTC"}, Hostname:"ci-4081.3.0-1-b9b10bea58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 
12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.397 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.397 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.398 [INFO][4167] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-1-b9b10bea58' Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.404 [INFO][4167] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.415 [INFO][4167] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.427 [INFO][4167] ipam/ipam.go 489: Trying affinity for 192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.431 [INFO][4167] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.439 [INFO][4167] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.439 [INFO][4167] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.443 [INFO][4167] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84 Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.452 [INFO][4167] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.460 [INFO][4167] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.130/26] block=192.168.56.128/26 handle="k8s-pod-network.b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.460 [INFO][4167] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.130/26] handle="k8s-pod-network.b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.460 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
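The [4167] trace above is the block-affinity IPAM path: acquire the host-wide lock, look up the node's block affinities, try the affine block 192.168.56.128/26, load it, claim the next free address (192.168.56.130 here), create a handle, write the block back, and release the lock. The sketch below is a deliberately simplified, in-memory Go version of that ordering, assuming one affine block per host and skipping the block's network address; it is not Calico's actual allocator.

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models a /26 IPAM block affine to one host, as in 192.168.56.128/26.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // address -> handle
}

type allocator struct {
	mu     sync.Mutex // stands in for the host-wide IPAM lock in the log
	blocks []*block
}

// assign claims the next free address from the host's affine block and records
// the handle, mirroring the "Trying affinity / Attempting to assign / Creating
// new handle / Writing block" sequence above.
func (a *allocator) assign(handle string) (netip.Addr, error) {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	for _, b := range a.blocks {
		// Skip the block's network address; start at the first usable one.
		for ip := b.cidr.Addr().Next(); b.cidr.Contains(ip); ip = ip.Next() {
			if _, used := b.allocated[ip]; !used {
				b.allocated[ip] = handle // "Writing block in order to claim IPs"
				return ip, nil
			}
		}
	}
	return netip.Addr{}, fmt.Errorf("no addresses left in affine blocks")
}

func main() {
	a := &allocator{blocks: []*block{{
		cidr:      netip.MustParsePrefix("192.168.56.128/26"),
		allocated: map[netip.Addr]string{netip.MustParseAddr("192.168.56.129"): "already-used"},
	}}}
	ip, _ := a.assign("demo-handle")
	fmt.Println("claimed", ip) // next free address after .129, i.e. 192.168.56.130
}
```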
Jan 17 12:23:08.510588 containerd[1603]: 2025-01-17 12:23:08.460 [INFO][4167] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.130/26] IPv6=[] ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" HandleID="k8s-pod-network.b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:08.512800 containerd[1603]: 2025-01-17 12:23:08.465 [INFO][4141] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-vmbmt" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0", GenerateName:"calico-apiserver-7d6ff6796c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c96ff941-06c3-4d81-9057-dc8dac75c1c4", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff6796c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"", Pod:"calico-apiserver-7d6ff6796c-vmbmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3db696dd913", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:08.512800 containerd[1603]: 2025-01-17 12:23:08.465 [INFO][4141] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.130/32] ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-vmbmt" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:08.512800 containerd[1603]: 2025-01-17 12:23:08.465 [INFO][4141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3db696dd913 ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-vmbmt" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:08.512800 containerd[1603]: 2025-01-17 12:23:08.486 [INFO][4141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-vmbmt" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:08.512800 containerd[1603]: 2025-01-17 12:23:08.488 [INFO][4141] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-vmbmt" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0", GenerateName:"calico-apiserver-7d6ff6796c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c96ff941-06c3-4d81-9057-dc8dac75c1c4", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff6796c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84", Pod:"calico-apiserver-7d6ff6796c-vmbmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3db696dd913", MAC:"96:7b:eb:30:c6:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:08.512800 containerd[1603]: 2025-01-17 12:23:08.504 [INFO][4141] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-vmbmt" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:08.556999 containerd[1603]: time="2025-01-17T12:23:08.556792223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:08.556999 containerd[1603]: time="2025-01-17T12:23:08.556891673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:08.559720 containerd[1603]: time="2025-01-17T12:23:08.558540649Z" level=info msg="StopPodSandbox for \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\"" Jan 17 12:23:08.560485 containerd[1603]: time="2025-01-17T12:23:08.556912649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:08.560485 containerd[1603]: time="2025-01-17T12:23:08.559794376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:08.817465 systemd-networkd[1223]: cali0bb7602db75: Gained IPv6LL Jan 17 12:23:08.827183 containerd[1603]: time="2025-01-17T12:23:08.826205910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff6796c-vmbmt,Uid:c96ff941-06c3-4d81-9057-dc8dac75c1c4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84\"" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.730 [INFO][4242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.730 [INFO][4242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" iface="eth0" netns="/var/run/netns/cni-5bab01f4-1b46-c52f-abb3-0bbbc90e52e7" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.734 [INFO][4242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" iface="eth0" netns="/var/run/netns/cni-5bab01f4-1b46-c52f-abb3-0bbbc90e52e7" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.734 [INFO][4242] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" iface="eth0" netns="/var/run/netns/cni-5bab01f4-1b46-c52f-abb3-0bbbc90e52e7" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.735 [INFO][4242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.735 [INFO][4242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.795 [INFO][4256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" HandleID="k8s-pod-network.78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.795 [INFO][4256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.796 [INFO][4256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.808 [WARNING][4256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" HandleID="k8s-pod-network.78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.808 [INFO][4256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" HandleID="k8s-pod-network.78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.820 [INFO][4256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:08.842904 containerd[1603]: 2025-01-17 12:23:08.827 [INFO][4242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:08.842904 containerd[1603]: time="2025-01-17T12:23:08.841580133Z" level=info msg="TearDown network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\" successfully" Jan 17 12:23:08.842904 containerd[1603]: time="2025-01-17T12:23:08.841610599Z" level=info msg="StopPodSandbox for \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\" returns successfully" Jan 17 12:23:08.842904 containerd[1603]: time="2025-01-17T12:23:08.842458406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvjx9,Uid:9e48819f-106c-43b3-89f6-2976b3a7c1c2,Namespace:calico-system,Attempt:1,}" Jan 17 12:23:08.848973 containerd[1603]: time="2025-01-17T12:23:08.848866722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:23:08.894035 systemd[1]: run-netns-cni\x2d5bab01f4\x2d1b46\x2dc52f\x2dabb3\x2d0bbbc90e52e7.mount: Deactivated successfully. 
Jan 17 12:23:09.041567 systemd-networkd[1223]: vxlan.calico: Link UP Jan 17 12:23:09.041581 systemd-networkd[1223]: vxlan.calico: Gained carrier Jan 17 12:23:09.139408 kubelet[2764]: E0117 12:23:09.139368 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:09.347435 systemd-networkd[1223]: cali9aeef62f472: Link UP Jan 17 12:23:09.347815 systemd-networkd[1223]: cali9aeef62f472: Gained carrier Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.022 [INFO][4272] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0 csi-node-driver- calico-system 9e48819f-106c-43b3-89f6-2976b3a7c1c2 853 0 2025-01-17 12:22:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-1-b9b10bea58 csi-node-driver-mvjx9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9aeef62f472 [] []}} ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Namespace="calico-system" Pod="csi-node-driver-mvjx9" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.023 [INFO][4272] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Namespace="calico-system" Pod="csi-node-driver-mvjx9" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.209 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" HandleID="k8s-pod-network.971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.234 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" HandleID="k8s-pod-network.971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000227230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-1-b9b10bea58", "pod":"csi-node-driver-mvjx9", "timestamp":"2025-01-17 12:23:09.209288433 +0000 UTC"}, Hostname:"ci-4081.3.0-1-b9b10bea58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.235 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.236 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.236 [INFO][4312] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-1-b9b10bea58' Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.240 [INFO][4312] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.253 [INFO][4312] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.276 [INFO][4312] ipam/ipam.go 489: Trying affinity for 192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.285 [INFO][4312] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.294 [INFO][4312] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.294 [INFO][4312] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.299 [INFO][4312] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3 Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.309 [INFO][4312] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.322 [INFO][4312] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.131/26] block=192.168.56.128/26 handle="k8s-pod-network.971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.323 [INFO][4312] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.131/26] handle="k8s-pod-network.971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.324 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
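The addresses handed out in this trace (.130, .131 here, then .132 and .133 further down) all come from the same affine /26. As a quick sanity check on those values, a one-off Go snippet can print the block's bounds: a /26 covers 64 addresses, 192.168.56.128 through 192.168.56.191.

```go
package main

import (
	"fmt"
	"net/netip"
)

// Prints the bounds of the affine block seen throughout this trace.
func main() {
	block := netip.MustParsePrefix("192.168.56.128/26")
	first := block.Addr()
	last := first
	for i := 0; i < 63; i++ { // a /26 spans 64 addresses
		last = last.Next()
	}
	fmt.Printf("block %s spans %s - %s (64 addresses)\n", block, first, last)
}
```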
Jan 17 12:23:09.405972 containerd[1603]: 2025-01-17 12:23:09.324 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.131/26] IPv6=[] ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" HandleID="k8s-pod-network.971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:09.415362 containerd[1603]: 2025-01-17 12:23:09.333 [INFO][4272] cni-plugin/k8s.go 386: Populated endpoint ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Namespace="calico-system" Pod="csi-node-driver-mvjx9" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e48819f-106c-43b3-89f6-2976b3a7c1c2", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"", Pod:"csi-node-driver-mvjx9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9aeef62f472", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:09.415362 containerd[1603]: 2025-01-17 12:23:09.333 [INFO][4272] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.131/32] ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Namespace="calico-system" Pod="csi-node-driver-mvjx9" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:09.415362 containerd[1603]: 2025-01-17 12:23:09.333 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9aeef62f472 ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Namespace="calico-system" Pod="csi-node-driver-mvjx9" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:09.415362 containerd[1603]: 2025-01-17 12:23:09.352 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Namespace="calico-system" Pod="csi-node-driver-mvjx9" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:09.415362 containerd[1603]: 2025-01-17 12:23:09.355 [INFO][4272] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Namespace="calico-system" Pod="csi-node-driver-mvjx9" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e48819f-106c-43b3-89f6-2976b3a7c1c2", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3", Pod:"csi-node-driver-mvjx9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9aeef62f472", MAC:"ea:20:e2:06:3f:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:09.415362 containerd[1603]: 2025-01-17 12:23:09.383 [INFO][4272] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3" Namespace="calico-system" Pod="csi-node-driver-mvjx9" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:09.473075 containerd[1603]: time="2025-01-17T12:23:09.472156193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:09.473075 containerd[1603]: time="2025-01-17T12:23:09.472295656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:09.473075 containerd[1603]: time="2025-01-17T12:23:09.472527231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:09.475941 containerd[1603]: time="2025-01-17T12:23:09.475656272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:09.526309 systemd[1]: run-containerd-runc-k8s.io-971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3-runc.cZuij5.mount: Deactivated successfully. 
Jan 17 12:23:09.558540 containerd[1603]: time="2025-01-17T12:23:09.557908014Z" level=info msg="StopPodSandbox for \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\"" Jan 17 12:23:09.641364 containerd[1603]: time="2025-01-17T12:23:09.637478088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvjx9,Uid:9e48819f-106c-43b3-89f6-2976b3a7c1c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3\"" Jan 17 12:23:09.701237 systemd-networkd[1223]: cali3db696dd913: Gained IPv6LL Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.728 [INFO][4397] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.728 [INFO][4397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" iface="eth0" netns="/var/run/netns/cni-5f39f271-f3d0-5b68-26a2-ebf21b2384b6" Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.729 [INFO][4397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" iface="eth0" netns="/var/run/netns/cni-5f39f271-f3d0-5b68-26a2-ebf21b2384b6" Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.730 [INFO][4397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" iface="eth0" netns="/var/run/netns/cni-5f39f271-f3d0-5b68-26a2-ebf21b2384b6" Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.730 [INFO][4397] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.730 [INFO][4397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.794 [INFO][4415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" HandleID="k8s-pod-network.490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.797 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.797 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.819 [WARNING][4415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" HandleID="k8s-pod-network.490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.819 [INFO][4415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" HandleID="k8s-pod-network.490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.822 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:09.831277 containerd[1603]: 2025-01-17 12:23:09.825 [INFO][4397] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:09.837580 containerd[1603]: time="2025-01-17T12:23:09.831457768Z" level=info msg="TearDown network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\" successfully" Jan 17 12:23:09.837580 containerd[1603]: time="2025-01-17T12:23:09.831504600Z" level=info msg="StopPodSandbox for \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\" returns successfully" Jan 17 12:23:09.837580 containerd[1603]: time="2025-01-17T12:23:09.834033286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6946578766-9thqb,Uid:fafa34d6-67fc-4cb4-83a2-49e3ad56846d,Namespace:calico-system,Attempt:1,}" Jan 17 12:23:09.840907 systemd[1]: run-netns-cni\x2d5f39f271\x2df3d0\x2d5b68\x2d26a2\x2debf21b2384b6.mount: Deactivated successfully. Jan 17 12:23:09.895315 systemd-journald[1141]: Under memory pressure, flushing caches. Jan 17 12:23:09.894261 systemd-resolved[1488]: Under memory pressure, flushing caches. Jan 17 12:23:09.894290 systemd-resolved[1488]: Flushed all caches. 
Jan 17 12:23:10.058432 systemd-networkd[1223]: cali0489a2f66c8: Link UP Jan 17 12:23:10.061344 systemd-networkd[1223]: cali0489a2f66c8: Gained carrier Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:09.929 [INFO][4442] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0 calico-kube-controllers-6946578766- calico-system fafa34d6-67fc-4cb4-83a2-49e3ad56846d 869 0 2025-01-17 12:22:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6946578766 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-1-b9b10bea58 calico-kube-controllers-6946578766-9thqb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0489a2f66c8 [] []}} ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Namespace="calico-system" Pod="calico-kube-controllers-6946578766-9thqb" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:09.930 [INFO][4442] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Namespace="calico-system" Pod="calico-kube-controllers-6946578766-9thqb" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:09.980 [INFO][4454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" HandleID="k8s-pod-network.680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:09.994 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" HandleID="k8s-pod-network.680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bcc90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-1-b9b10bea58", "pod":"calico-kube-controllers-6946578766-9thqb", "timestamp":"2025-01-17 12:23:09.980275695 +0000 UTC"}, Hostname:"ci-4081.3.0-1-b9b10bea58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:09.994 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:09.994 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:09.994 [INFO][4454] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-1-b9b10bea58' Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:09.997 [INFO][4454] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.009 [INFO][4454] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.017 [INFO][4454] ipam/ipam.go 489: Trying affinity for 192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.021 [INFO][4454] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.025 [INFO][4454] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.025 [INFO][4454] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.028 [INFO][4454] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635 Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.035 [INFO][4454] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.046 [INFO][4454] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.132/26] block=192.168.56.128/26 handle="k8s-pod-network.680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.047 [INFO][4454] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.132/26] handle="k8s-pod-network.680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.047 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:23:10.089824 containerd[1603]: 2025-01-17 12:23:10.047 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.132/26] IPv6=[] ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" HandleID="k8s-pod-network.680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:10.091066 containerd[1603]: 2025-01-17 12:23:10.052 [INFO][4442] cni-plugin/k8s.go 386: Populated endpoint ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Namespace="calico-system" Pod="calico-kube-controllers-6946578766-9thqb" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0", GenerateName:"calico-kube-controllers-6946578766-", Namespace:"calico-system", SelfLink:"", UID:"fafa34d6-67fc-4cb4-83a2-49e3ad56846d", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6946578766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"", Pod:"calico-kube-controllers-6946578766-9thqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0489a2f66c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:10.091066 containerd[1603]: 2025-01-17 12:23:10.052 [INFO][4442] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.132/32] ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Namespace="calico-system" Pod="calico-kube-controllers-6946578766-9thqb" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:10.091066 containerd[1603]: 2025-01-17 12:23:10.052 [INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0489a2f66c8 ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Namespace="calico-system" Pod="calico-kube-controllers-6946578766-9thqb" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:10.091066 containerd[1603]: 2025-01-17 12:23:10.061 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Namespace="calico-system" Pod="calico-kube-controllers-6946578766-9thqb" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:10.091066 
containerd[1603]: 2025-01-17 12:23:10.063 [INFO][4442] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Namespace="calico-system" Pod="calico-kube-controllers-6946578766-9thqb" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0", GenerateName:"calico-kube-controllers-6946578766-", Namespace:"calico-system", SelfLink:"", UID:"fafa34d6-67fc-4cb4-83a2-49e3ad56846d", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6946578766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635", Pod:"calico-kube-controllers-6946578766-9thqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0489a2f66c8", MAC:"5a:1d:a5:c5:e3:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:10.091066 containerd[1603]: 2025-01-17 12:23:10.085 [INFO][4442] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635" Namespace="calico-system" Pod="calico-kube-controllers-6946578766-9thqb" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:10.134051 containerd[1603]: time="2025-01-17T12:23:10.133610063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:10.134051 containerd[1603]: time="2025-01-17T12:23:10.133691416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:10.134051 containerd[1603]: time="2025-01-17T12:23:10.133734029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:10.134051 containerd[1603]: time="2025-01-17T12:23:10.133889132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:10.195374 kubelet[2764]: E0117 12:23:10.194930 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:10.259729 containerd[1603]: time="2025-01-17T12:23:10.259675175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6946578766-9thqb,Uid:fafa34d6-67fc-4cb4-83a2-49e3ad56846d,Namespace:calico-system,Attempt:1,} returns sandbox id \"680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635\"" Jan 17 12:23:10.567825 containerd[1603]: time="2025-01-17T12:23:10.567487112Z" level=info msg="StopPodSandbox for \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\"" Jan 17 12:23:10.917326 systemd-networkd[1223]: vxlan.calico: Gained IPv6LL Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.733 [INFO][4529] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.734 [INFO][4529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" iface="eth0" netns="/var/run/netns/cni-5e649ea6-7849-1ab9-5e1c-dffa97a0f6db" Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.735 [INFO][4529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" iface="eth0" netns="/var/run/netns/cni-5e649ea6-7849-1ab9-5e1c-dffa97a0f6db" Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.740 [INFO][4529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" iface="eth0" netns="/var/run/netns/cni-5e649ea6-7849-1ab9-5e1c-dffa97a0f6db" Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.740 [INFO][4529] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.740 [INFO][4529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.905 [INFO][4536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" HandleID="k8s-pod-network.b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.905 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.905 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.927 [WARNING][4536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" HandleID="k8s-pod-network.b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.927 [INFO][4536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" HandleID="k8s-pod-network.b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.930 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:10.942202 containerd[1603]: 2025-01-17 12:23:10.936 [INFO][4529] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:10.942770 containerd[1603]: time="2025-01-17T12:23:10.942378996Z" level=info msg="TearDown network for sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\" successfully" Jan 17 12:23:10.942770 containerd[1603]: time="2025-01-17T12:23:10.942415794Z" level=info msg="StopPodSandbox for \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\" returns successfully" Jan 17 12:23:10.945816 kubelet[2764]: E0117 12:23:10.943420 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:10.946003 containerd[1603]: time="2025-01-17T12:23:10.945368421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tftgd,Uid:21096fca-e879-4e15-89db-72e1ab742bae,Namespace:kube-system,Attempt:1,}" Jan 17 12:23:10.949652 systemd[1]: run-netns-cni\x2d5e649ea6\x2d7849\x2d1ab9\x2d5e1c\x2ddffa97a0f6db.mount: Deactivated successfully. 
Jan 17 12:23:11.110131 systemd-networkd[1223]: cali9aeef62f472: Gained IPv6LL Jan 17 12:23:11.211978 kubelet[2764]: E0117 12:23:11.209074 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:11.379128 systemd-networkd[1223]: calidd35b19fe81: Link UP Jan 17 12:23:11.385697 systemd-networkd[1223]: calidd35b19fe81: Gained carrier Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.127 [INFO][4547] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0 coredns-76f75df574- kube-system 21096fca-e879-4e15-89db-72e1ab742bae 876 0 2025-01-17 12:22:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-1-b9b10bea58 coredns-76f75df574-tftgd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd35b19fe81 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Namespace="kube-system" Pod="coredns-76f75df574-tftgd" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.128 [INFO][4547] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Namespace="kube-system" Pod="coredns-76f75df574-tftgd" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.251 [INFO][4559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" HandleID="k8s-pod-network.1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.271 [INFO][4559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" HandleID="k8s-pod-network.1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c3c10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-1-b9b10bea58", "pod":"coredns-76f75df574-tftgd", "timestamp":"2025-01-17 12:23:11.251210283 +0000 UTC"}, Hostname:"ci-4081.3.0-1-b9b10bea58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.271 [INFO][4559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.271 [INFO][4559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.271 [INFO][4559] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-1-b9b10bea58' Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.277 [INFO][4559] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.287 [INFO][4559] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.297 [INFO][4559] ipam/ipam.go 489: Trying affinity for 192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.306 [INFO][4559] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.315 [INFO][4559] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.317 [INFO][4559] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.323 [INFO][4559] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2 Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.332 [INFO][4559] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.348 [INFO][4559] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.133/26] block=192.168.56.128/26 handle="k8s-pod-network.1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.348 [INFO][4559] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.133/26] handle="k8s-pod-network.1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.348 [INFO][4559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
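Unlike the earlier endpoints, the coredns WorkloadEndpoint (dumped above and again below) carries named ports, which the Go struct formatter prints in hex: 0x35 is port 53 for dns and dns-tcp, and 0x23c1 is port 9153 for metrics. A trivial Go check of those values:

```go
package main

import "fmt"

// Confirms the hex port values from the coredns WorkloadEndpoint dump.
func main() {
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, port := range ports {
		fmt.Printf("%-8s %d\n", name, port) // dns/dns-tcp -> 53, metrics -> 9153
	}
}
```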
Jan 17 12:23:11.423079 containerd[1603]: 2025-01-17 12:23:11.348 [INFO][4559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.133/26] IPv6=[] ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" HandleID="k8s-pod-network.1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:11.426675 containerd[1603]: 2025-01-17 12:23:11.359 [INFO][4547] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Namespace="kube-system" Pod="coredns-76f75df574-tftgd" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"21096fca-e879-4e15-89db-72e1ab742bae", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"", Pod:"coredns-76f75df574-tftgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd35b19fe81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:11.426675 containerd[1603]: 2025-01-17 12:23:11.359 [INFO][4547] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.133/32] ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Namespace="kube-system" Pod="coredns-76f75df574-tftgd" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:11.426675 containerd[1603]: 2025-01-17 12:23:11.359 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd35b19fe81 ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Namespace="kube-system" Pod="coredns-76f75df574-tftgd" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:11.426675 containerd[1603]: 2025-01-17 12:23:11.385 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Namespace="kube-system" Pod="coredns-76f75df574-tftgd" 
WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:11.426675 containerd[1603]: 2025-01-17 12:23:11.387 [INFO][4547] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Namespace="kube-system" Pod="coredns-76f75df574-tftgd" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"21096fca-e879-4e15-89db-72e1ab742bae", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2", Pod:"coredns-76f75df574-tftgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd35b19fe81", MAC:"a6:c0:7f:47:a3:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:11.426675 containerd[1603]: 2025-01-17 12:23:11.409 [INFO][4547] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2" Namespace="kube-system" Pod="coredns-76f75df574-tftgd" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:11.479678 containerd[1603]: time="2025-01-17T12:23:11.479460285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:11.479678 containerd[1603]: time="2025-01-17T12:23:11.479539753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:11.483140 containerd[1603]: time="2025-01-17T12:23:11.481078642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:11.483140 containerd[1603]: time="2025-01-17T12:23:11.481631482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:11.558508 containerd[1603]: time="2025-01-17T12:23:11.558444674Z" level=info msg="StopPodSandbox for \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\"" Jan 17 12:23:11.654571 containerd[1603]: time="2025-01-17T12:23:11.654497272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tftgd,Uid:21096fca-e879-4e15-89db-72e1ab742bae,Namespace:kube-system,Attempt:1,} returns sandbox id \"1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2\"" Jan 17 12:23:11.657882 kubelet[2764]: E0117 12:23:11.655402 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:11.664838 containerd[1603]: time="2025-01-17T12:23:11.664792921Z" level=info msg="CreateContainer within sandbox \"1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:23:11.712085 containerd[1603]: time="2025-01-17T12:23:11.711975219Z" level=info msg="CreateContainer within sandbox \"1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"102080bf237634bc62ffa3174c12cc78d082c9a3a82a78281a987da62b593c79\"" Jan 17 12:23:11.717891 containerd[1603]: time="2025-01-17T12:23:11.715549249Z" level=info msg="StartContainer for \"102080bf237634bc62ffa3174c12cc78d082c9a3a82a78281a987da62b593c79\"" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.775 [INFO][4630] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.776 [INFO][4630] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" iface="eth0" netns="/var/run/netns/cni-16e458c2-35c5-d9a3-882c-4ac74535ad2a" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.777 [INFO][4630] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" iface="eth0" netns="/var/run/netns/cni-16e458c2-35c5-d9a3-882c-4ac74535ad2a" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.778 [INFO][4630] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" iface="eth0" netns="/var/run/netns/cni-16e458c2-35c5-d9a3-882c-4ac74535ad2a" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.778 [INFO][4630] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.778 [INFO][4630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.845 [INFO][4652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" HandleID="k8s-pod-network.9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.846 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.846 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.870 [WARNING][4652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" HandleID="k8s-pod-network.9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.870 [INFO][4652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" HandleID="k8s-pod-network.9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.878 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:11.890683 containerd[1603]: 2025-01-17 12:23:11.884 [INFO][4630] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:11.892199 containerd[1603]: time="2025-01-17T12:23:11.891870417Z" level=info msg="TearDown network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\" successfully" Jan 17 12:23:11.892199 containerd[1603]: time="2025-01-17T12:23:11.891921948Z" level=info msg="StopPodSandbox for \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\" returns successfully" Jan 17 12:23:11.893050 containerd[1603]: time="2025-01-17T12:23:11.892912570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff6796c-mj8x2,Uid:21685820-0784-4b7f-bf71-b7f2faefd98c,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:23:11.922986 containerd[1603]: time="2025-01-17T12:23:11.921754634Z" level=info msg="StartContainer for \"102080bf237634bc62ffa3174c12cc78d082c9a3a82a78281a987da62b593c79\" returns successfully" Jan 17 12:23:11.957148 systemd[1]: run-netns-cni\x2d16e458c2\x2d35c5\x2dd9a3\x2d882c\x2d4ac74535ad2a.mount: Deactivated successfully. 
Jan 17 12:23:12.005287 systemd-networkd[1223]: cali0489a2f66c8: Gained IPv6LL Jan 17 12:23:12.232733 kubelet[2764]: E0117 12:23:12.230198 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:12.293261 kubelet[2764]: I0117 12:23:12.291667 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-tftgd" podStartSLOduration=38.291620056 podStartE2EDuration="38.291620056s" podCreationTimestamp="2025-01-17 12:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:12.257198666 +0000 UTC m=+51.956604065" watchObservedRunningTime="2025-01-17 12:23:12.291620056 +0000 UTC m=+51.991025800" Jan 17 12:23:12.327213 systemd-networkd[1223]: calid9e991916aa: Link UP Jan 17 12:23:12.329490 systemd-networkd[1223]: calid9e991916aa: Gained carrier Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.067 [INFO][4684] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0 calico-apiserver-7d6ff6796c- calico-apiserver 21685820-0784-4b7f-bf71-b7f2faefd98c 890 0 2025-01-17 12:22:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6ff6796c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-1-b9b10bea58 calico-apiserver-7d6ff6796c-mj8x2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid9e991916aa [] []}} ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-mj8x2" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.068 [INFO][4684] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-mj8x2" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.156 [INFO][4698] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" HandleID="k8s-pod-network.99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.182 [INFO][4698] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" HandleID="k8s-pod-network.99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038d460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-1-b9b10bea58", "pod":"calico-apiserver-7d6ff6796c-mj8x2", "timestamp":"2025-01-17 12:23:12.156163981 +0000 UTC"}, 
Hostname:"ci-4081.3.0-1-b9b10bea58", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.182 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.182 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.182 [INFO][4698] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-1-b9b10bea58' Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.186 [INFO][4698] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.195 [INFO][4698] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.208 [INFO][4698] ipam/ipam.go 489: Trying affinity for 192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.215 [INFO][4698] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.238 [INFO][4698] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.238 [INFO][4698] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.243 [INFO][4698] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.264 [INFO][4698] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.295 [INFO][4698] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.134/26] block=192.168.56.128/26 handle="k8s-pod-network.99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.297 [INFO][4698] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.134/26] handle="k8s-pod-network.99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" host="ci-4081.3.0-1-b9b10bea58" Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.297 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:23:12.387730 containerd[1603]: 2025-01-17 12:23:12.297 [INFO][4698] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.134/26] IPv6=[] ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" HandleID="k8s-pod-network.99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:12.390167 containerd[1603]: 2025-01-17 12:23:12.302 [INFO][4684] cni-plugin/k8s.go 386: Populated endpoint ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-mj8x2" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0", GenerateName:"calico-apiserver-7d6ff6796c-", Namespace:"calico-apiserver", SelfLink:"", UID:"21685820-0784-4b7f-bf71-b7f2faefd98c", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff6796c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"", Pod:"calico-apiserver-7d6ff6796c-mj8x2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9e991916aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:12.390167 containerd[1603]: 2025-01-17 12:23:12.303 [INFO][4684] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.134/32] ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-mj8x2" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:12.390167 containerd[1603]: 2025-01-17 12:23:12.304 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9e991916aa ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-mj8x2" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:12.390167 containerd[1603]: 2025-01-17 12:23:12.331 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-mj8x2" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:12.390167 containerd[1603]: 2025-01-17 12:23:12.334 [INFO][4684] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-mj8x2" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0", GenerateName:"calico-apiserver-7d6ff6796c-", Namespace:"calico-apiserver", SelfLink:"", UID:"21685820-0784-4b7f-bf71-b7f2faefd98c", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff6796c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a", Pod:"calico-apiserver-7d6ff6796c-mj8x2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9e991916aa", MAC:"96:a2:b2:1a:a3:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:12.390167 containerd[1603]: 2025-01-17 12:23:12.363 [INFO][4684] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff6796c-mj8x2" WorkloadEndpoint="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:12.484958 containerd[1603]: time="2025-01-17T12:23:12.484470825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:12.484958 containerd[1603]: time="2025-01-17T12:23:12.484607263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:12.484958 containerd[1603]: time="2025-01-17T12:23:12.484623436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:12.484958 containerd[1603]: time="2025-01-17T12:23:12.484739957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:12.623380 containerd[1603]: time="2025-01-17T12:23:12.623320982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff6796c-mj8x2,Uid:21685820-0784-4b7f-bf71-b7f2faefd98c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a\"" Jan 17 12:23:13.094467 systemd-networkd[1223]: calidd35b19fe81: Gained IPv6LL Jan 17 12:23:13.111555 kubelet[2764]: I0117 12:23:13.109489 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:13.116313 kubelet[2764]: E0117 12:23:13.116263 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:13.260761 containerd[1603]: time="2025-01-17T12:23:13.253572141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:13.260761 containerd[1603]: time="2025-01-17T12:23:13.255897506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:23:13.260761 containerd[1603]: time="2025-01-17T12:23:13.256774666Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:13.264874 containerd[1603]: time="2025-01-17T12:23:13.264235008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:13.268436 containerd[1603]: time="2025-01-17T12:23:13.265222857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.416117091s" Jan 17 12:23:13.268436 containerd[1603]: time="2025-01-17T12:23:13.267296656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:23:13.278503 containerd[1603]: time="2025-01-17T12:23:13.275306389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:23:13.278812 containerd[1603]: time="2025-01-17T12:23:13.278728869Z" level=info msg="CreateContainer within sandbox \"b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:23:13.297112 kubelet[2764]: E0117 12:23:13.290985 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:13.420505 containerd[1603]: time="2025-01-17T12:23:13.412272629Z" level=info msg="CreateContainer within sandbox \"b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25c7d3c43b0f0730d5b0f7fde542f882bc328b99221cbcc5bc1a6fb5e22db2bb\"" Jan 17 12:23:13.423340 
containerd[1603]: time="2025-01-17T12:23:13.423265379Z" level=info msg="StartContainer for \"25c7d3c43b0f0730d5b0f7fde542f882bc328b99221cbcc5bc1a6fb5e22db2bb\"" Jan 17 12:23:13.660246 kubelet[2764]: E0117 12:23:13.659225 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:13.805939 containerd[1603]: time="2025-01-17T12:23:13.805766910Z" level=info msg="StartContainer for \"25c7d3c43b0f0730d5b0f7fde542f882bc328b99221cbcc5bc1a6fb5e22db2bb\" returns successfully" Jan 17 12:23:14.118985 systemd-networkd[1223]: calid9e991916aa: Gained IPv6LL Jan 17 12:23:14.252646 kubelet[2764]: E0117 12:23:14.251893 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:14.277850 kubelet[2764]: I0117 12:23:14.277790 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d6ff6796c-vmbmt" podStartSLOduration=27.840136864 podStartE2EDuration="32.277738202s" podCreationTimestamp="2025-01-17 12:22:42 +0000 UTC" firstStartedPulling="2025-01-17 12:23:08.832401958 +0000 UTC m=+48.531807353" lastFinishedPulling="2025-01-17 12:23:13.270003315 +0000 UTC m=+52.969408691" observedRunningTime="2025-01-17 12:23:14.277350326 +0000 UTC m=+53.976755710" watchObservedRunningTime="2025-01-17 12:23:14.277738202 +0000 UTC m=+53.977143595" Jan 17 12:23:14.288282 systemd[1]: run-containerd-runc-k8s.io-25c7d3c43b0f0730d5b0f7fde542f882bc328b99221cbcc5bc1a6fb5e22db2bb-runc.96KwZR.mount: Deactivated successfully. Jan 17 12:23:15.134213 containerd[1603]: time="2025-01-17T12:23:15.134120612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:15.135566 containerd[1603]: time="2025-01-17T12:23:15.135379687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:23:15.138058 containerd[1603]: time="2025-01-17T12:23:15.136317615Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:15.139698 containerd[1603]: time="2025-01-17T12:23:15.139641150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:15.141487 containerd[1603]: time="2025-01-17T12:23:15.140819029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.865458174s" Jan 17 12:23:15.141638 containerd[1603]: time="2025-01-17T12:23:15.141611309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:23:15.144928 containerd[1603]: time="2025-01-17T12:23:15.144874326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:23:15.153247 
containerd[1603]: time="2025-01-17T12:23:15.151361847Z" level=info msg="CreateContainer within sandbox \"971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:23:15.209698 containerd[1603]: time="2025-01-17T12:23:15.209624679Z" level=info msg="CreateContainer within sandbox \"971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1cc0a7bd8ae1804d518c2f3eec4e4b31766b025d2be64530554ba2a3d770f0d1\"" Jan 17 12:23:15.212365 containerd[1603]: time="2025-01-17T12:23:15.212303467Z" level=info msg="StartContainer for \"1cc0a7bd8ae1804d518c2f3eec4e4b31766b025d2be64530554ba2a3d770f0d1\"" Jan 17 12:23:15.401655 containerd[1603]: time="2025-01-17T12:23:15.400889872Z" level=info msg="StartContainer for \"1cc0a7bd8ae1804d518c2f3eec4e4b31766b025d2be64530554ba2a3d770f0d1\" returns successfully" Jan 17 12:23:15.587195 systemd[1]: Started sshd@7-137.184.236.252:22-139.178.68.195:41554.service - OpenSSH per-connection server daemon (139.178.68.195:41554). Jan 17 12:23:15.718509 sshd[4894]: Accepted publickey for core from 139.178.68.195 port 41554 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:15.724767 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:15.754315 systemd-logind[1574]: New session 8 of user core. Jan 17 12:23:15.760767 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:23:15.784667 systemd-journald[1141]: Under memory pressure, flushing caches. Jan 17 12:23:15.782596 systemd-resolved[1488]: Under memory pressure, flushing caches. Jan 17 12:23:15.782654 systemd-resolved[1488]: Flushed all caches. Jan 17 12:23:16.301362 sshd[4894]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:16.316701 systemd[1]: sshd@7-137.184.236.252:22-139.178.68.195:41554.service: Deactivated successfully. Jan 17 12:23:16.333479 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:23:16.334858 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:23:16.340107 systemd-logind[1574]: Removed session 8. 
Jan 17 12:23:18.039852 containerd[1603]: time="2025-01-17T12:23:18.039783622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:18.043187 containerd[1603]: time="2025-01-17T12:23:18.042825625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:23:18.045404 containerd[1603]: time="2025-01-17T12:23:18.044501686Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:18.058160 containerd[1603]: time="2025-01-17T12:23:18.057243721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:18.059129 containerd[1603]: time="2025-01-17T12:23:18.059081744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.912008757s" Jan 17 12:23:18.059309 containerd[1603]: time="2025-01-17T12:23:18.059284660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:23:18.061927 containerd[1603]: time="2025-01-17T12:23:18.060596194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:23:18.102002 containerd[1603]: time="2025-01-17T12:23:18.101733179Z" level=info msg="CreateContainer within sandbox \"680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:23:18.129835 containerd[1603]: time="2025-01-17T12:23:18.125960096Z" level=info msg="CreateContainer within sandbox \"680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ae78dcc13e42cdc01438e51bafdedd08b9865043148f1a06e686d7961b147d92\"" Jan 17 12:23:18.129835 containerd[1603]: time="2025-01-17T12:23:18.128358668Z" level=info msg="StartContainer for \"ae78dcc13e42cdc01438e51bafdedd08b9865043148f1a06e686d7961b147d92\"" Jan 17 12:23:18.276195 containerd[1603]: time="2025-01-17T12:23:18.276131825Z" level=info msg="StartContainer for \"ae78dcc13e42cdc01438e51bafdedd08b9865043148f1a06e686d7961b147d92\" returns successfully" Jan 17 12:23:18.472328 containerd[1603]: time="2025-01-17T12:23:18.472255893Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:18.478178 containerd[1603]: time="2025-01-17T12:23:18.477990787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:23:18.512269 containerd[1603]: time="2025-01-17T12:23:18.512161355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 449.972558ms" Jan 17 12:23:18.512269 containerd[1603]: time="2025-01-17T12:23:18.512218321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:23:18.516625 containerd[1603]: time="2025-01-17T12:23:18.516388214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:23:18.522440 containerd[1603]: time="2025-01-17T12:23:18.521312888Z" level=info msg="CreateContainer within sandbox \"99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:23:18.548068 containerd[1603]: time="2025-01-17T12:23:18.547646864Z" level=info msg="CreateContainer within sandbox \"99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1286483efe098299133bf9017b738c8e4275edd20a7de28b0130b93abf646059\"" Jan 17 12:23:18.552365 containerd[1603]: time="2025-01-17T12:23:18.551122207Z" level=info msg="StartContainer for \"1286483efe098299133bf9017b738c8e4275edd20a7de28b0130b93abf646059\"" Jan 17 12:23:18.634491 kubelet[2764]: I0117 12:23:18.634296 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6946578766-9thqb" podStartSLOduration=27.837013708 podStartE2EDuration="35.634224343s" podCreationTimestamp="2025-01-17 12:22:43 +0000 UTC" firstStartedPulling="2025-01-17 12:23:10.26259068 +0000 UTC m=+49.961996052" lastFinishedPulling="2025-01-17 12:23:18.059801311 +0000 UTC m=+57.759206687" observedRunningTime="2025-01-17 12:23:18.421682167 +0000 UTC m=+58.121087565" watchObservedRunningTime="2025-01-17 12:23:18.634224343 +0000 UTC m=+58.333629747" Jan 17 12:23:18.729684 containerd[1603]: time="2025-01-17T12:23:18.729508318Z" level=info msg="StartContainer for \"1286483efe098299133bf9017b738c8e4275edd20a7de28b0130b93abf646059\" returns successfully" Jan 17 12:23:19.411721 kubelet[2764]: I0117 12:23:19.411657 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d6ff6796c-mj8x2" podStartSLOduration=31.526540538 podStartE2EDuration="37.41159592s" podCreationTimestamp="2025-01-17 12:22:42 +0000 UTC" firstStartedPulling="2025-01-17 12:23:12.627701463 +0000 UTC m=+52.327106832" lastFinishedPulling="2025-01-17 12:23:18.512756831 +0000 UTC m=+58.212162214" observedRunningTime="2025-01-17 12:23:19.40890616 +0000 UTC m=+59.108311549" watchObservedRunningTime="2025-01-17 12:23:19.41159592 +0000 UTC m=+59.111001326" Jan 17 12:23:20.317606 containerd[1603]: time="2025-01-17T12:23:20.316789808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:20.319232 containerd[1603]: time="2025-01-17T12:23:20.319089999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:23:20.321054 containerd[1603]: time="2025-01-17T12:23:20.320719222Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 17 12:23:20.325620 containerd[1603]: time="2025-01-17T12:23:20.325572932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:20.327532 containerd[1603]: time="2025-01-17T12:23:20.327227352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.81073064s" Jan 17 12:23:20.327532 containerd[1603]: time="2025-01-17T12:23:20.327327085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:23:20.333961 containerd[1603]: time="2025-01-17T12:23:20.333583701Z" level=info msg="CreateContainer within sandbox \"971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:23:20.395933 kubelet[2764]: I0117 12:23:20.395881 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:20.408107 containerd[1603]: time="2025-01-17T12:23:20.406861609Z" level=info msg="CreateContainer within sandbox \"971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6dd74618d85c2610b492fb213808e4974e7af717436c5a74ba9f6d7222c52084\"" Jan 17 12:23:20.412932 containerd[1603]: time="2025-01-17T12:23:20.409652032Z" level=info msg="StartContainer for \"6dd74618d85c2610b492fb213808e4974e7af717436c5a74ba9f6d7222c52084\"" Jan 17 12:23:20.502677 systemd[1]: run-containerd-runc-k8s.io-6dd74618d85c2610b492fb213808e4974e7af717436c5a74ba9f6d7222c52084-runc.vzd1mZ.mount: Deactivated successfully. Jan 17 12:23:20.596798 containerd[1603]: time="2025-01-17T12:23:20.596479512Z" level=info msg="StartContainer for \"6dd74618d85c2610b492fb213808e4974e7af717436c5a74ba9f6d7222c52084\" returns successfully" Jan 17 12:23:20.643763 containerd[1603]: time="2025-01-17T12:23:20.643705092Z" level=info msg="StopPodSandbox for \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\"" Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.769 [WARNING][5069] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e48819f-106c-43b3-89f6-2976b3a7c1c2", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3", Pod:"csi-node-driver-mvjx9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9aeef62f472", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.774 [INFO][5069] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.774 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" iface="eth0" netns="" Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.774 [INFO][5069] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.774 [INFO][5069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.855 [INFO][5075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" HandleID="k8s-pod-network.78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.855 [INFO][5075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.856 [INFO][5075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.870 [WARNING][5075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" HandleID="k8s-pod-network.78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.870 [INFO][5075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" HandleID="k8s-pod-network.78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.877 [INFO][5075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:20.889576 containerd[1603]: 2025-01-17 12:23:20.884 [INFO][5069] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:20.889576 containerd[1603]: time="2025-01-17T12:23:20.889485855Z" level=info msg="TearDown network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\" successfully" Jan 17 12:23:20.889576 containerd[1603]: time="2025-01-17T12:23:20.889522205Z" level=info msg="StopPodSandbox for \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\" returns successfully" Jan 17 12:23:20.891948 containerd[1603]: time="2025-01-17T12:23:20.891135950Z" level=info msg="RemovePodSandbox for \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\"" Jan 17 12:23:20.891948 containerd[1603]: time="2025-01-17T12:23:20.891188193Z" level=info msg="Forcibly stopping sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\"" Jan 17 12:23:20.926648 kubelet[2764]: I0117 12:23:20.925883 2764 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:23:20.933038 kubelet[2764]: I0117 12:23:20.932953 2764 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:20.980 [WARNING][5093] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e48819f-106c-43b3-89f6-2976b3a7c1c2", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"971cac325d62f1c9a98c7a9aa629dcaa1a40d702cc17428de5e7ba286f60faa3", Pod:"csi-node-driver-mvjx9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9aeef62f472", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:20.981 [INFO][5093] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:20.981 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" iface="eth0" netns="" Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:20.981 [INFO][5093] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:20.981 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:21.036 [INFO][5099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" HandleID="k8s-pod-network.78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:21.036 [INFO][5099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:21.036 [INFO][5099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:21.057 [WARNING][5099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" HandleID="k8s-pod-network.78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:21.057 [INFO][5099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" HandleID="k8s-pod-network.78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Workload="ci--4081.3.0--1--b9b10bea58-k8s-csi--node--driver--mvjx9-eth0" Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:21.061 [INFO][5099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:21.066901 containerd[1603]: 2025-01-17 12:23:21.063 [INFO][5093] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06" Jan 17 12:23:21.066901 containerd[1603]: time="2025-01-17T12:23:21.066363584Z" level=info msg="TearDown network for sandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\" successfully" Jan 17 12:23:21.099797 containerd[1603]: time="2025-01-17T12:23:21.099660234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:21.099797 containerd[1603]: time="2025-01-17T12:23:21.099814327Z" level=info msg="RemovePodSandbox \"78a14119bf7c63e2b5e171be0fc2c40f318da6bb50cf454606f4196171c1ce06\" returns successfully" Jan 17 12:23:21.101483 containerd[1603]: time="2025-01-17T12:23:21.100927319Z" level=info msg="StopPodSandbox for \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\"" Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.184 [WARNING][5118] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"21096fca-e879-4e15-89db-72e1ab742bae", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2", Pod:"coredns-76f75df574-tftgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd35b19fe81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.185 [INFO][5118] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.185 [INFO][5118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" iface="eth0" netns="" Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.185 [INFO][5118] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.185 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.255 [INFO][5125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" HandleID="k8s-pod-network.b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.255 [INFO][5125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.255 [INFO][5125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.267 [WARNING][5125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" HandleID="k8s-pod-network.b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.267 [INFO][5125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" HandleID="k8s-pod-network.b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.272 [INFO][5125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:21.276982 containerd[1603]: 2025-01-17 12:23:21.274 [INFO][5118] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:21.278550 containerd[1603]: time="2025-01-17T12:23:21.278497332Z" level=info msg="TearDown network for sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\" successfully" Jan 17 12:23:21.278868 containerd[1603]: time="2025-01-17T12:23:21.278709050Z" level=info msg="StopPodSandbox for \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\" returns successfully" Jan 17 12:23:21.279976 containerd[1603]: time="2025-01-17T12:23:21.279928145Z" level=info msg="RemovePodSandbox for \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\"" Jan 17 12:23:21.279976 containerd[1603]: time="2025-01-17T12:23:21.279979139Z" level=info msg="Forcibly stopping sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\"" Jan 17 12:23:21.313625 systemd[1]: Started sshd@8-137.184.236.252:22-139.178.68.195:41568.service - OpenSSH per-connection server daemon (139.178.68.195:41568). Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.358 [WARNING][5143] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"21096fca-e879-4e15-89db-72e1ab742bae", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"1af403224062af560690281b4383107791f9e26e748d055640d56204216b57a2", Pod:"coredns-76f75df574-tftgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd35b19fe81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.358 [INFO][5143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.358 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" iface="eth0" netns="" Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.358 [INFO][5143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.358 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.471 [INFO][5151] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" HandleID="k8s-pod-network.b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.471 [INFO][5151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.471 [INFO][5151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.482 [WARNING][5151] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" HandleID="k8s-pod-network.b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.482 [INFO][5151] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" HandleID="k8s-pod-network.b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--tftgd-eth0" Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.486 [INFO][5151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:21.493218 containerd[1603]: 2025-01-17 12:23:21.490 [INFO][5143] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c" Jan 17 12:23:21.496219 containerd[1603]: time="2025-01-17T12:23:21.493994836Z" level=info msg="TearDown network for sandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\" successfully" Jan 17 12:23:21.502106 containerd[1603]: time="2025-01-17T12:23:21.501899142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:21.502284 containerd[1603]: time="2025-01-17T12:23:21.502125977Z" level=info msg="RemovePodSandbox \"b764a9f128f62af59d347b27336b594511a20b2bbabdaecf129ff0aea4e7608c\" returns successfully" Jan 17 12:23:21.503699 containerd[1603]: time="2025-01-17T12:23:21.502902295Z" level=info msg="StopPodSandbox for \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\"" Jan 17 12:23:21.531637 sshd[5147]: Accepted publickey for core from 139.178.68.195 port 41568 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:21.535820 sshd[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:21.545251 systemd-logind[1574]: New session 9 of user core. Jan 17 12:23:21.550801 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.584 [WARNING][5170] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c6d9fd0c-351d-4397-ad95-002c18dff9fb", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6", Pod:"coredns-76f75df574-t9p6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bb7602db75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.584 [INFO][5170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.584 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" iface="eth0" netns="" Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.585 [INFO][5170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.585 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.628 [INFO][5179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" HandleID="k8s-pod-network.bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.630 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.630 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.639 [WARNING][5179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" HandleID="k8s-pod-network.bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.639 [INFO][5179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" HandleID="k8s-pod-network.bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.643 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:21.651367 containerd[1603]: 2025-01-17 12:23:21.646 [INFO][5170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:21.651367 containerd[1603]: time="2025-01-17T12:23:21.650277754Z" level=info msg="TearDown network for sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\" successfully" Jan 17 12:23:21.651367 containerd[1603]: time="2025-01-17T12:23:21.650310295Z" level=info msg="StopPodSandbox for \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\" returns successfully" Jan 17 12:23:21.663708 containerd[1603]: time="2025-01-17T12:23:21.653456865Z" level=info msg="RemovePodSandbox for \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\"" Jan 17 12:23:21.664157 containerd[1603]: time="2025-01-17T12:23:21.663731179Z" level=info msg="Forcibly stopping sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\"" Jan 17 12:23:21.801335 systemd-journald[1141]: Under memory pressure, flushing caches. Jan 17 12:23:21.800678 systemd-resolved[1488]: Under memory pressure, flushing caches. Jan 17 12:23:21.800736 systemd-resolved[1488]: Flushed all caches. Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.750 [WARNING][5202] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c6d9fd0c-351d-4397-ad95-002c18dff9fb", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"59f1d732df8bbf15c79768030edceaa06d03a7c52632da1085a83f4790bc07d6", Pod:"coredns-76f75df574-t9p6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bb7602db75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.750 [INFO][5202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.750 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" iface="eth0" netns="" Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.750 [INFO][5202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.750 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.806 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" HandleID="k8s-pod-network.bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.806 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.806 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.822 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" HandleID="k8s-pod-network.bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.823 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" HandleID="k8s-pod-network.bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Workload="ci--4081.3.0--1--b9b10bea58-k8s-coredns--76f75df574--t9p6v-eth0" Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.829 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:21.839523 containerd[1603]: 2025-01-17 12:23:21.836 [INFO][5202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d" Jan 17 12:23:21.839523 containerd[1603]: time="2025-01-17T12:23:21.839315264Z" level=info msg="TearDown network for sandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\" successfully" Jan 17 12:23:21.847768 containerd[1603]: time="2025-01-17T12:23:21.846702564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:21.847768 containerd[1603]: time="2025-01-17T12:23:21.846850795Z" level=info msg="RemovePodSandbox \"bec0cba65cc8363d292d2aa8efbd35794fdf2013707bf8d3ff779795aacd363d\" returns successfully" Jan 17 12:23:21.847768 containerd[1603]: time="2025-01-17T12:23:21.847745386Z" level=info msg="StopPodSandbox for \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\"" Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.914 [WARNING][5229] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0", GenerateName:"calico-kube-controllers-6946578766-", Namespace:"calico-system", SelfLink:"", UID:"fafa34d6-67fc-4cb4-83a2-49e3ad56846d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6946578766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635", Pod:"calico-kube-controllers-6946578766-9thqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0489a2f66c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.917 [INFO][5229] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.917 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" iface="eth0" netns="" Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.917 [INFO][5229] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.917 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.968 [INFO][5235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" HandleID="k8s-pod-network.490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.968 [INFO][5235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.969 [INFO][5235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.983 [WARNING][5235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" HandleID="k8s-pod-network.490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.985 [INFO][5235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" HandleID="k8s-pod-network.490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.991 [INFO][5235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:21.999512 containerd[1603]: 2025-01-17 12:23:21.995 [INFO][5229] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:21.999512 containerd[1603]: time="2025-01-17T12:23:21.999317009Z" level=info msg="TearDown network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\" successfully" Jan 17 12:23:21.999512 containerd[1603]: time="2025-01-17T12:23:21.999357464Z" level=info msg="StopPodSandbox for \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\" returns successfully" Jan 17 12:23:21.999512 containerd[1603]: time="2025-01-17T12:23:22.000182029Z" level=info msg="RemovePodSandbox for \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\"" Jan 17 12:23:21.999512 containerd[1603]: time="2025-01-17T12:23:22.000223556Z" level=info msg="Forcibly stopping sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\"" Jan 17 12:23:22.202820 sshd[5147]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.106 [WARNING][5254] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0", GenerateName:"calico-kube-controllers-6946578766-", Namespace:"calico-system", SelfLink:"", UID:"fafa34d6-67fc-4cb4-83a2-49e3ad56846d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6946578766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"680326e5ce7a2b3f40d3cd40b369bde9ccaef86f87eb6f48861aa85fc935b635", Pod:"calico-kube-controllers-6946578766-9thqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0489a2f66c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.107 [INFO][5254] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.107 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" iface="eth0" netns="" Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.107 [INFO][5254] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.107 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.171 [INFO][5260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" HandleID="k8s-pod-network.490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.171 [INFO][5260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.171 [INFO][5260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.185 [WARNING][5260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" HandleID="k8s-pod-network.490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.185 [INFO][5260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" HandleID="k8s-pod-network.490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--kube--controllers--6946578766--9thqb-eth0" Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.187 [INFO][5260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:22.205608 containerd[1603]: 2025-01-17 12:23:22.200 [INFO][5254] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c" Jan 17 12:23:22.206164 containerd[1603]: time="2025-01-17T12:23:22.205684693Z" level=info msg="TearDown network for sandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\" successfully" Jan 17 12:23:22.211828 systemd[1]: sshd@8-137.184.236.252:22-139.178.68.195:41568.service: Deactivated successfully. Jan 17 12:23:22.220029 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:23:22.228225 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:23:22.229917 containerd[1603]: time="2025-01-17T12:23:22.229809942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:22.229917 containerd[1603]: time="2025-01-17T12:23:22.229897961Z" level=info msg="RemovePodSandbox \"490f3102f94b3953f0a9b730df811cd31cd6fa1549a935cd2e4e478a47b0e50c\" returns successfully" Jan 17 12:23:22.231504 containerd[1603]: time="2025-01-17T12:23:22.231071656Z" level=info msg="StopPodSandbox for \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\"" Jan 17 12:23:22.231391 systemd-logind[1574]: Removed session 9. Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.313 [WARNING][5281] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0", GenerateName:"calico-apiserver-7d6ff6796c-", Namespace:"calico-apiserver", SelfLink:"", UID:"21685820-0784-4b7f-bf71-b7f2faefd98c", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff6796c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a", Pod:"calico-apiserver-7d6ff6796c-mj8x2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9e991916aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.314 [INFO][5281] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.314 [INFO][5281] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" iface="eth0" netns="" Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.314 [INFO][5281] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.314 [INFO][5281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.353 [INFO][5287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" HandleID="k8s-pod-network.9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.354 [INFO][5287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.354 [INFO][5287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.363 [WARNING][5287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" HandleID="k8s-pod-network.9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.363 [INFO][5287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" HandleID="k8s-pod-network.9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.367 [INFO][5287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:22.373850 containerd[1603]: 2025-01-17 12:23:22.371 [INFO][5281] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:22.376278 containerd[1603]: time="2025-01-17T12:23:22.373898975Z" level=info msg="TearDown network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\" successfully" Jan 17 12:23:22.376278 containerd[1603]: time="2025-01-17T12:23:22.373925879Z" level=info msg="StopPodSandbox for \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\" returns successfully" Jan 17 12:23:22.376278 containerd[1603]: time="2025-01-17T12:23:22.374652804Z" level=info msg="RemovePodSandbox for \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\"" Jan 17 12:23:22.376278 containerd[1603]: time="2025-01-17T12:23:22.374702841Z" level=info msg="Forcibly stopping sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\"" Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.444 [WARNING][5305] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0", GenerateName:"calico-apiserver-7d6ff6796c-", Namespace:"calico-apiserver", SelfLink:"", UID:"21685820-0784-4b7f-bf71-b7f2faefd98c", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff6796c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"99314ac7ecb50968b210bd80b5baebdc99c0994c07cd6eb03445cb50c7d08e2a", Pod:"calico-apiserver-7d6ff6796c-mj8x2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9e991916aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.444 [INFO][5305] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.444 [INFO][5305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" iface="eth0" netns="" Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.445 [INFO][5305] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.445 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.484 [INFO][5311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" HandleID="k8s-pod-network.9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.484 [INFO][5311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.484 [INFO][5311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.493 [WARNING][5311] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" HandleID="k8s-pod-network.9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.493 [INFO][5311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" HandleID="k8s-pod-network.9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--mj8x2-eth0" Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.495 [INFO][5311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:22.502228 containerd[1603]: 2025-01-17 12:23:22.498 [INFO][5305] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac" Jan 17 12:23:22.502228 containerd[1603]: time="2025-01-17T12:23:22.501851952Z" level=info msg="TearDown network for sandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\" successfully" Jan 17 12:23:22.509045 containerd[1603]: time="2025-01-17T12:23:22.508963142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:22.509236 containerd[1603]: time="2025-01-17T12:23:22.509098305Z" level=info msg="RemovePodSandbox \"9369937916106b83e2882528fdbade176f715f3267cb7b09d4a20503163491ac\" returns successfully" Jan 17 12:23:22.509945 containerd[1603]: time="2025-01-17T12:23:22.509675976Z" level=info msg="StopPodSandbox for \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\"" Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.574 [WARNING][5329] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0", GenerateName:"calico-apiserver-7d6ff6796c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c96ff941-06c3-4d81-9057-dc8dac75c1c4", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff6796c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84", Pod:"calico-apiserver-7d6ff6796c-vmbmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3db696dd913", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.574 [INFO][5329] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.575 [INFO][5329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" iface="eth0" netns="" Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.575 [INFO][5329] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.575 [INFO][5329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.616 [INFO][5337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" HandleID="k8s-pod-network.9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.617 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.617 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.624 [WARNING][5337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" HandleID="k8s-pod-network.9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.624 [INFO][5337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" HandleID="k8s-pod-network.9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.627 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:22.632973 containerd[1603]: 2025-01-17 12:23:22.630 [INFO][5329] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:22.638920 containerd[1603]: time="2025-01-17T12:23:22.633292466Z" level=info msg="TearDown network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\" successfully" Jan 17 12:23:22.638920 containerd[1603]: time="2025-01-17T12:23:22.633376169Z" level=info msg="StopPodSandbox for \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\" returns successfully" Jan 17 12:23:22.638920 containerd[1603]: time="2025-01-17T12:23:22.634409164Z" level=info msg="RemovePodSandbox for \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\"" Jan 17 12:23:22.638920 containerd[1603]: time="2025-01-17T12:23:22.634456018Z" level=info msg="Forcibly stopping sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\"" Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.705 [WARNING][5355] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0", GenerateName:"calico-apiserver-7d6ff6796c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c96ff941-06c3-4d81-9057-dc8dac75c1c4", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff6796c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-1-b9b10bea58", ContainerID:"b1566886e305e6f1028e2bcfc5c6ea23352cce27d4ef71b5350c56074aaf9e84", Pod:"calico-apiserver-7d6ff6796c-vmbmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3db696dd913", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.705 [INFO][5355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.705 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" iface="eth0" netns="" Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.705 [INFO][5355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.706 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.742 [INFO][5361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" HandleID="k8s-pod-network.9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.743 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.743 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.751 [WARNING][5361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" HandleID="k8s-pod-network.9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.752 [INFO][5361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" HandleID="k8s-pod-network.9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Workload="ci--4081.3.0--1--b9b10bea58-k8s-calico--apiserver--7d6ff6796c--vmbmt-eth0" Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.755 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:22.761763 containerd[1603]: 2025-01-17 12:23:22.757 [INFO][5355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66" Jan 17 12:23:22.761763 containerd[1603]: time="2025-01-17T12:23:22.761510646Z" level=info msg="TearDown network for sandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\" successfully" Jan 17 12:23:22.767625 containerd[1603]: time="2025-01-17T12:23:22.767280223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:23:22.767625 containerd[1603]: time="2025-01-17T12:23:22.767359541Z" level=info msg="RemovePodSandbox \"9e744f43003690227e6fefb7f1d4d1712edd60b120dc93086a0ec4f4f8e56e66\" returns successfully" Jan 17 12:23:27.213764 systemd[1]: Started sshd@9-137.184.236.252:22-139.178.68.195:52302.service - OpenSSH per-connection server daemon (139.178.68.195:52302). Jan 17 12:23:27.281060 sshd[5387]: Accepted publickey for core from 139.178.68.195 port 52302 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:27.283841 sshd[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:27.291126 systemd-logind[1574]: New session 10 of user core. Jan 17 12:23:27.299656 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:23:27.486304 sshd[5387]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:27.496068 systemd[1]: sshd@9-137.184.236.252:22-139.178.68.195:52302.service: Deactivated successfully. Jan 17 12:23:27.500395 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:23:27.502869 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:23:27.504387 systemd-logind[1574]: Removed session 10. 
Jan 17 12:23:28.528103 kubelet[2764]: I0117 12:23:28.527290 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:23:28.574976 kubelet[2764]: I0117 12:23:28.574917 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-mvjx9" podStartSLOduration=34.887602667 podStartE2EDuration="45.574873356s" podCreationTimestamp="2025-01-17 12:22:43 +0000 UTC" firstStartedPulling="2025-01-17 12:23:09.640347862 +0000 UTC m=+49.339753231" lastFinishedPulling="2025-01-17 12:23:20.327618523 +0000 UTC m=+60.027023920" observedRunningTime="2025-01-17 12:23:21.447157442 +0000 UTC m=+61.146562850" watchObservedRunningTime="2025-01-17 12:23:28.574873356 +0000 UTC m=+68.274278747" Jan 17 12:23:31.555177 kubelet[2764]: E0117 12:23:31.554991 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:32.502646 systemd[1]: Started sshd@10-137.184.236.252:22-139.178.68.195:52312.service - OpenSSH per-connection server daemon (139.178.68.195:52312). Jan 17 12:23:32.558761 sshd[5410]: Accepted publickey for core from 139.178.68.195 port 52312 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:32.562193 sshd[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:32.570215 systemd-logind[1574]: New session 11 of user core. Jan 17 12:23:32.575884 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:23:32.758845 sshd[5410]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:32.772348 systemd[1]: Started sshd@11-137.184.236.252:22-139.178.68.195:52316.service - OpenSSH per-connection server daemon (139.178.68.195:52316). Jan 17 12:23:32.773754 systemd[1]: sshd@10-137.184.236.252:22-139.178.68.195:52312.service: Deactivated successfully. Jan 17 12:23:32.778718 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:23:32.784479 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:23:32.786936 systemd-logind[1574]: Removed session 11. Jan 17 12:23:32.825300 sshd[5425]: Accepted publickey for core from 139.178.68.195 port 52316 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:32.827990 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:32.837512 systemd-logind[1574]: New session 12 of user core. Jan 17 12:23:32.843624 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:23:33.094980 sshd[5425]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:33.105673 systemd[1]: Started sshd@12-137.184.236.252:22-139.178.68.195:52320.service - OpenSSH per-connection server daemon (139.178.68.195:52320). Jan 17 12:23:33.118450 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:23:33.132664 systemd[1]: sshd@11-137.184.236.252:22-139.178.68.195:52316.service: Deactivated successfully. Jan 17 12:23:33.136488 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:23:33.151838 systemd-logind[1574]: Removed session 12. 
Jan 17 12:23:33.211892 sshd[5436]: Accepted publickey for core from 139.178.68.195 port 52320 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:33.214028 sshd[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:33.222895 systemd-logind[1574]: New session 13 of user core. Jan 17 12:23:33.229650 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:23:33.410891 sshd[5436]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:33.418321 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:23:33.419183 systemd[1]: sshd@12-137.184.236.252:22-139.178.68.195:52320.service: Deactivated successfully. Jan 17 12:23:33.427095 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:23:33.432480 systemd-logind[1574]: Removed session 13. Jan 17 12:23:38.426154 systemd[1]: Started sshd@13-137.184.236.252:22-139.178.68.195:52942.service - OpenSSH per-connection server daemon (139.178.68.195:52942). Jan 17 12:23:38.482058 sshd[5459]: Accepted publickey for core from 139.178.68.195 port 52942 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:38.486289 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:38.493362 systemd-logind[1574]: New session 14 of user core. Jan 17 12:23:38.498674 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:23:38.684909 sshd[5459]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:38.690945 systemd[1]: sshd@13-137.184.236.252:22-139.178.68.195:52942.service: Deactivated successfully. Jan 17 12:23:38.695323 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:23:38.696435 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:23:38.698120 systemd-logind[1574]: Removed session 14. Jan 17 12:23:41.555340 kubelet[2764]: E0117 12:23:41.554989 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:43.699857 systemd[1]: Started sshd@14-137.184.236.252:22-139.178.68.195:52950.service - OpenSSH per-connection server daemon (139.178.68.195:52950). Jan 17 12:23:43.825788 sshd[5494]: Accepted publickey for core from 139.178.68.195 port 52950 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:43.830141 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:43.841419 systemd-logind[1574]: New session 15 of user core. Jan 17 12:23:43.848806 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:23:44.124315 sshd[5494]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:44.134381 systemd[1]: sshd@14-137.184.236.252:22-139.178.68.195:52950.service: Deactivated successfully. Jan 17 12:23:44.153122 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:23:44.158544 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:23:44.163407 systemd-logind[1574]: Removed session 15. 
Jan 17 12:23:45.555907 kubelet[2764]: E0117 12:23:45.555280 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:23:49.136922 systemd[1]: Started sshd@15-137.184.236.252:22-139.178.68.195:42958.service - OpenSSH per-connection server daemon (139.178.68.195:42958).
Jan 17 12:23:49.316356 sshd[5508]: Accepted publickey for core from 139.178.68.195 port 42958 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:23:49.324697 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:49.343337 systemd-logind[1574]: New session 16 of user core.
Jan 17 12:23:49.352971 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:23:49.848617 systemd-journald[1141]: Under memory pressure, flushing caches.
Jan 17 12:23:49.846343 systemd-resolved[1488]: Under memory pressure, flushing caches.
Jan 17 12:23:49.846356 systemd-resolved[1488]: Flushed all caches.
Jan 17 12:23:50.220275 sshd[5508]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:50.229865 systemd[1]: sshd@15-137.184.236.252:22-139.178.68.195:42958.service: Deactivated successfully.
Jan 17 12:23:50.234424 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:23:50.235355 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:23:50.239592 systemd-logind[1574]: Removed session 16.
Jan 17 12:23:52.563760 kubelet[2764]: E0117 12:23:52.563695 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:23:52.568193 kubelet[2764]: E0117 12:23:52.566279 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:23:55.236852 systemd[1]: Started sshd@16-137.184.236.252:22-139.178.68.195:52000.service - OpenSSH per-connection server daemon (139.178.68.195:52000).
Jan 17 12:23:55.319313 sshd[5549]: Accepted publickey for core from 139.178.68.195 port 52000 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:23:55.322945 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:55.331742 systemd-logind[1574]: New session 17 of user core.
Jan 17 12:23:55.341727 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:23:55.625620 sshd[5549]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:55.639566 systemd[1]: sshd@16-137.184.236.252:22-139.178.68.195:52000.service: Deactivated successfully.
Jan 17 12:23:55.646697 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:23:55.648464 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:23:55.652877 systemd-logind[1574]: Removed session 17.
Jan 17 12:24:00.643871 systemd[1]: Started sshd@17-137.184.236.252:22-139.178.68.195:52004.service - OpenSSH per-connection server daemon (139.178.68.195:52004).
Jan 17 12:24:00.737251 sshd[5582]: Accepted publickey for core from 139.178.68.195 port 52004 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:24:00.739833 sshd[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:00.748217 systemd-logind[1574]: New session 18 of user core.
Jan 17 12:24:00.755682 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:24:01.000148 sshd[5582]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:01.012373 systemd[1]: Started sshd@18-137.184.236.252:22-139.178.68.195:52008.service - OpenSSH per-connection server daemon (139.178.68.195:52008).
Jan 17 12:24:01.014397 systemd[1]: sshd@17-137.184.236.252:22-139.178.68.195:52004.service: Deactivated successfully.
Jan 17 12:24:01.022359 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:24:01.025195 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:24:01.030679 systemd-logind[1574]: Removed session 18.
Jan 17 12:24:01.106666 sshd[5593]: Accepted publickey for core from 139.178.68.195 port 52008 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:24:01.111117 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:01.123250 systemd-logind[1574]: New session 19 of user core.
Jan 17 12:24:01.129671 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:24:01.695813 sshd[5593]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:01.701447 systemd[1]: Started sshd@19-137.184.236.252:22-139.178.68.195:52014.service - OpenSSH per-connection server daemon (139.178.68.195:52014).
Jan 17 12:24:01.707508 systemd[1]: sshd@18-137.184.236.252:22-139.178.68.195:52008.service: Deactivated successfully.
Jan 17 12:24:01.715954 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:24:01.722284 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:24:01.752657 systemd-logind[1574]: Removed session 19.
Jan 17 12:24:01.849402 sshd[5606]: Accepted publickey for core from 139.178.68.195 port 52014 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:24:01.853988 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:01.864228 systemd-logind[1574]: New session 20 of user core.
Jan 17 12:24:01.877461 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:24:04.929865 sshd[5606]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:04.954624 systemd[1]: Started sshd@20-137.184.236.252:22-139.178.68.195:48336.service - OpenSSH per-connection server daemon (139.178.68.195:48336).
Jan 17 12:24:04.958973 systemd[1]: sshd@19-137.184.236.252:22-139.178.68.195:52014.service: Deactivated successfully.
Jan 17 12:24:04.982190 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:24:04.995244 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:24:05.006134 systemd-logind[1574]: Removed session 20.
Jan 17 12:24:05.083938 sshd[5623]: Accepted publickey for core from 139.178.68.195 port 48336 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:24:05.087662 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:05.098301 systemd-logind[1574]: New session 21 of user core.
Jan 17 12:24:05.106420 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:24:05.831682 systemd-journald[1141]: Under memory pressure, flushing caches.
Jan 17 12:24:05.829432 systemd-resolved[1488]: Under memory pressure, flushing caches.
Jan 17 12:24:05.829442 systemd-resolved[1488]: Flushed all caches.
Jan 17 12:24:05.948592 sshd[5623]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:05.958338 systemd[1]: Started sshd@21-137.184.236.252:22-139.178.68.195:48352.service - OpenSSH per-connection server daemon (139.178.68.195:48352).
Jan 17 12:24:05.970901 systemd[1]: sshd@20-137.184.236.252:22-139.178.68.195:48336.service: Deactivated successfully.
Jan 17 12:24:05.984377 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:24:05.991417 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:24:05.993280 systemd-logind[1574]: Removed session 21.
Jan 17 12:24:06.039162 sshd[5640]: Accepted publickey for core from 139.178.68.195 port 48352 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:24:06.042167 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:06.050183 systemd-logind[1574]: New session 22 of user core.
Jan 17 12:24:06.061706 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:24:06.253358 sshd[5640]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:06.262807 systemd[1]: sshd@21-137.184.236.252:22-139.178.68.195:48352.service: Deactivated successfully.
Jan 17 12:24:06.264339 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:24:06.269370 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:24:06.271584 systemd-logind[1574]: Removed session 22.
Jan 17 12:24:11.267402 systemd[1]: Started sshd@22-137.184.236.252:22-139.178.68.195:48364.service - OpenSSH per-connection server daemon (139.178.68.195:48364).
Jan 17 12:24:11.317255 sshd[5657]: Accepted publickey for core from 139.178.68.195 port 48364 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:24:11.320789 sshd[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:11.330219 systemd-logind[1574]: New session 23 of user core.
Jan 17 12:24:11.337417 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:24:11.543364 sshd[5657]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:11.549099 systemd[1]: sshd@22-137.184.236.252:22-139.178.68.195:48364.service: Deactivated successfully.
Jan 17 12:24:11.556944 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 12:24:11.558115 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit.
Jan 17 12:24:11.562421 systemd-logind[1574]: Removed session 23.
Jan 17 12:24:14.573078 kubelet[2764]: E0117 12:24:14.572987 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:24:16.556193 systemd[1]: Started sshd@23-137.184.236.252:22-139.178.68.195:55024.service - OpenSSH per-connection server daemon (139.178.68.195:55024).
Jan 17 12:24:16.648066 sshd[5697]: Accepted publickey for core from 139.178.68.195 port 55024 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:24:16.655411 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:16.663255 systemd-logind[1574]: New session 24 of user core.
Jan 17 12:24:16.668900 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 12:24:16.923230 sshd[5697]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:16.930279 systemd[1]: sshd@23-137.184.236.252:22-139.178.68.195:55024.service: Deactivated successfully.
Jan 17 12:24:16.936897 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:24:16.936987 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:24:16.940767 systemd-logind[1574]: Removed session 24.
Jan 17 12:24:21.940197 systemd[1]: Started sshd@24-137.184.236.252:22-139.178.68.195:55034.service - OpenSSH per-connection server daemon (139.178.68.195:55034).
Jan 17 12:24:21.996076 sshd[5714]: Accepted publickey for core from 139.178.68.195 port 55034 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:24:21.997773 sshd[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:22.008267 systemd-logind[1574]: New session 25 of user core.
Jan 17 12:24:22.019607 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 12:24:22.217106 sshd[5714]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:22.221994 systemd-logind[1574]: Session 25 logged out. Waiting for processes to exit.
Jan 17 12:24:22.222310 systemd[1]: sshd@24-137.184.236.252:22-139.178.68.195:55034.service: Deactivated successfully.
Jan 17 12:24:22.228237 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 12:24:22.230515 systemd-logind[1574]: Removed session 25.
Jan 17 12:24:22.585620 kubelet[2764]: E0117 12:24:22.585115 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:24:27.233523 systemd[1]: Started sshd@25-137.184.236.252:22-139.178.68.195:56438.service - OpenSSH per-connection server daemon (139.178.68.195:56438).
Jan 17 12:24:27.281061 sshd[5746]: Accepted publickey for core from 139.178.68.195 port 56438 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:24:27.282306 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:27.288041 systemd-logind[1574]: New session 26 of user core.
Jan 17 12:24:27.294592 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 12:24:27.466644 sshd[5746]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:27.471231 systemd[1]: sshd@25-137.184.236.252:22-139.178.68.195:56438.service: Deactivated successfully.
Jan 17 12:24:27.478652 systemd-logind[1574]: Session 26 logged out. Waiting for processes to exit.
Jan 17 12:24:27.479705 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 12:24:27.484626 systemd-logind[1574]: Removed session 26.