Apr 30 03:23:00.005917 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:23:00.005973 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:23:00.005997 kernel: BIOS-provided physical RAM map:
Apr 30 03:23:00.006009 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 03:23:00.006021 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 03:23:00.006033 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 03:23:00.006049 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Apr 30 03:23:00.006071 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Apr 30 03:23:00.006084 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 03:23:00.006102 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 03:23:00.006116 kernel: NX (Execute Disable) protection: active
Apr 30 03:23:00.006129 kernel: APIC: Static calls initialized
Apr 30 03:23:00.006146 kernel: SMBIOS 2.8 present.
Apr 30 03:23:00.006161 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Apr 30 03:23:00.006177 kernel: Hypervisor detected: KVM
Apr 30 03:23:00.006197 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:23:00.006215 kernel: kvm-clock: using sched offset of 3144059238 cycles
Apr 30 03:23:00.006231 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:23:00.006247 kernel: tsc: Detected 2494.134 MHz processor
Apr 30 03:23:00.006262 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:23:00.006278 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:23:00.006293 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Apr 30 03:23:00.006308 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 03:23:00.006323 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:23:00.006343 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:23:00.006358 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Apr 30 03:23:00.006374 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:00.006389 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:00.006404 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:00.006419 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 30 03:23:00.006433 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:00.006448 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:00.006464 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:00.006483 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:00.006499 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Apr 30 03:23:00.006514 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Apr 30 03:23:00.006529 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 30 03:23:00.006544 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Apr 30 03:23:00.006558 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Apr 30 03:23:00.006574 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Apr 30 03:23:00.006615 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Apr 30 03:23:00.006632 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:23:00.006648 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:23:00.006665 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 30 03:23:00.006682 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 30 03:23:00.006702 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Apr 30 03:23:00.006718 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Apr 30 03:23:00.006739 kernel: Zone ranges:
Apr 30 03:23:00.007233 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:23:00.007252 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Apr 30 03:23:00.007269 kernel: Normal empty
Apr 30 03:23:00.007282 kernel: Movable zone start for each node
Apr 30 03:23:00.007294 kernel: Early memory node ranges
Apr 30 03:23:00.007306 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 03:23:00.007319 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Apr 30 03:23:00.007332 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Apr 30 03:23:00.007354 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:23:00.007366 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 03:23:00.007386 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Apr 30 03:23:00.007400 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:23:00.007413 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:23:00.007428 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:23:00.007443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:23:00.007456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:23:00.007470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:23:00.007490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:23:00.007502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:23:00.007515 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:23:00.007527 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:23:00.007540 kernel: TSC deadline timer available
Apr 30 03:23:00.007564 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:23:00.007576 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:23:00.007588 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Apr 30 03:23:00.007606 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:23:00.007619 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:23:00.007638 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:23:00.007652 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:23:00.007665 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:23:00.007678 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:23:00.007690 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 03:23:00.007705 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:23:00.007720 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:23:00.007733 kernel: random: crng init done
Apr 30 03:23:00.007765 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:23:00.007778 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:23:00.007791 kernel: Fallback order for Node 0: 0
Apr 30 03:23:00.007804 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Apr 30 03:23:00.007818 kernel: Policy zone: DMA32
Apr 30 03:23:00.007831 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:23:00.007845 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125148K reserved, 0K cma-reserved)
Apr 30 03:23:00.007857 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:23:00.007877 kernel: Kernel/User page tables isolation: enabled
Apr 30 03:23:00.007890 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:23:00.007903 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:23:00.007917 kernel: Dynamic Preempt: voluntary
Apr 30 03:23:00.007929 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:23:00.007944 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:23:00.007955 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:23:00.007967 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:23:00.007980 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:23:00.007993 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:23:00.008032 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:23:00.008063 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:23:00.008076 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:23:00.008091 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:23:00.008109 kernel: Console: colour VGA+ 80x25
Apr 30 03:23:00.008122 kernel: printk: console [tty0] enabled
Apr 30 03:23:00.008144 kernel: printk: console [ttyS0] enabled
Apr 30 03:23:00.008156 kernel: ACPI: Core revision 20230628
Apr 30 03:23:00.008169 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:23:00.008189 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:23:00.008201 kernel: x2apic enabled
Apr 30 03:23:00.008213 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:23:00.008225 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:23:00.008238 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Apr 30 03:23:00.008250 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Apr 30 03:23:00.008263 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 30 03:23:00.008276 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 30 03:23:00.008309 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:23:00.008324 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:23:00.008338 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:23:00.008355 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:23:00.008370 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 30 03:23:00.008385 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:23:00.008399 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:23:00.008414 kernel: MDS: Mitigation: Clear CPU buffers
Apr 30 03:23:00.008427 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:23:00.008454 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:23:00.008468 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:23:00.008482 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:23:00.008495 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:23:00.008511 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 30 03:23:00.008526 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:23:00.008540 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:23:00.008554 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:23:00.008574 kernel: landlock: Up and running.
Apr 30 03:23:00.008590 kernel: SELinux: Initializing.
Apr 30 03:23:00.008603 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:23:00.008617 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:23:00.008632 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Apr 30 03:23:00.008649 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:23:00.008663 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:23:00.008677 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:23:00.008692 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Apr 30 03:23:00.008712 kernel: signal: max sigframe size: 1776
Apr 30 03:23:00.008727 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:23:00.013115 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:23:00.013165 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:23:00.013183 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:23:00.013200 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:23:00.013216 kernel: .... node #0, CPUs: #1
Apr 30 03:23:00.013233 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:23:00.013257 kernel: smpboot: Max logical packages: 1
Apr 30 03:23:00.013286 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Apr 30 03:23:00.013303 kernel: devtmpfs: initialized
Apr 30 03:23:00.013318 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:23:00.013334 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:23:00.013350 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:23:00.013366 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:23:00.013380 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:23:00.013396 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:23:00.013411 kernel: audit: type=2000 audit(1745983379.111:1): state=initialized audit_enabled=0 res=1
Apr 30 03:23:00.013429 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:23:00.013444 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:23:00.013459 kernel: cpuidle: using governor menu
Apr 30 03:23:00.013475 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:23:00.013492 kernel: dca service started, version 1.12.1
Apr 30 03:23:00.013508 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:23:00.013521 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:23:00.013534 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:23:00.013549 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:23:00.013568 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:23:00.013585 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:23:00.013602 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:23:00.013617 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:23:00.013633 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:23:00.013650 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:23:00.013666 kernel: ACPI: Interpreter enabled
Apr 30 03:23:00.013682 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:23:00.013699 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:23:00.013721 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:23:00.013738 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:23:00.013773 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 03:23:00.013788 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:23:00.014164 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:23:00.014362 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 03:23:00.014532 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 03:23:00.014562 kernel: acpiphp: Slot [3] registered
Apr 30 03:23:00.014574 kernel: acpiphp: Slot [4] registered
Apr 30 03:23:00.014588 kernel: acpiphp: Slot [5] registered
Apr 30 03:23:00.014602 kernel: acpiphp: Slot [6] registered
Apr 30 03:23:00.014617 kernel: acpiphp: Slot [7] registered
Apr 30 03:23:00.014629 kernel: acpiphp: Slot [8] registered
Apr 30 03:23:00.014642 kernel: acpiphp: Slot [9] registered
Apr 30 03:23:00.014656 kernel: acpiphp: Slot [10] registered
Apr 30 03:23:00.014670 kernel: acpiphp: Slot [11] registered
Apr 30 03:23:00.014683 kernel: acpiphp: Slot [12] registered
Apr 30 03:23:00.014702 kernel: acpiphp: Slot [13] registered
Apr 30 03:23:00.014715 kernel: acpiphp: Slot [14] registered
Apr 30 03:23:00.014728 kernel: acpiphp: Slot [15] registered
Apr 30 03:23:00.014759 kernel: acpiphp: Slot [16] registered
Apr 30 03:23:00.014773 kernel: acpiphp: Slot [17] registered
Apr 30 03:23:00.014787 kernel: acpiphp: Slot [18] registered
Apr 30 03:23:00.015264 kernel: acpiphp: Slot [19] registered
Apr 30 03:23:00.015285 kernel: acpiphp: Slot [20] registered
Apr 30 03:23:00.015299 kernel: acpiphp: Slot [21] registered
Apr 30 03:23:00.015323 kernel: acpiphp: Slot [22] registered
Apr 30 03:23:00.015336 kernel: acpiphp: Slot [23] registered
Apr 30 03:23:00.015349 kernel: acpiphp: Slot [24] registered
Apr 30 03:23:00.015361 kernel: acpiphp: Slot [25] registered
Apr 30 03:23:00.015374 kernel: acpiphp: Slot [26] registered
Apr 30 03:23:00.015387 kernel: acpiphp: Slot [27] registered
Apr 30 03:23:00.015400 kernel: acpiphp: Slot [28] registered
Apr 30 03:23:00.015414 kernel: acpiphp: Slot [29] registered
Apr 30 03:23:00.015427 kernel: acpiphp: Slot [30] registered
Apr 30 03:23:00.015440 kernel: acpiphp: Slot [31] registered
Apr 30 03:23:00.015461 kernel: PCI host bridge to bus 0000:00
Apr 30 03:23:00.015695 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:23:00.015877 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:23:00.016021 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:23:00.016161 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 03:23:00.016305 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Apr 30 03:23:00.016444 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:23:00.016667 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 03:23:00.020029 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 03:23:00.020245 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 30 03:23:00.020415 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Apr 30 03:23:00.020575 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 30 03:23:00.020738 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 30 03:23:00.020949 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 30 03:23:00.021119 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 30 03:23:00.021297 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Apr 30 03:23:00.021455 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Apr 30 03:23:00.021652 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 30 03:23:00.023968 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Apr 30 03:23:00.024179 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Apr 30 03:23:00.024358 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Apr 30 03:23:00.024517 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Apr 30 03:23:00.024670 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Apr 30 03:23:00.024840 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Apr 30 03:23:00.024993 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Apr 30 03:23:00.025147 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:23:00.025334 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:23:00.025502 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Apr 30 03:23:00.025662 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Apr 30 03:23:00.027414 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Apr 30 03:23:00.027634 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:23:00.028970 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Apr 30 03:23:00.029186 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Apr 30 03:23:00.029356 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Apr 30 03:23:00.029531 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Apr 30 03:23:00.029685 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Apr 30 03:23:00.030459 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Apr 30 03:23:00.030623 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Apr 30 03:23:00.032239 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:23:00.032457 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Apr 30 03:23:00.032640 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Apr 30 03:23:00.032851 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Apr 30 03:23:00.033071 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:23:00.033295 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Apr 30 03:23:00.033456 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Apr 30 03:23:00.033609 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Apr 30 03:23:00.035999 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Apr 30 03:23:00.036209 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Apr 30 03:23:00.036368 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Apr 30 03:23:00.036392 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:23:00.036409 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:23:00.036427 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:23:00.036444 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:23:00.036461 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 03:23:00.036485 kernel: iommu: Default domain type: Translated
Apr 30 03:23:00.036502 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:23:00.036515 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:23:00.036531 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:23:00.036548 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 03:23:00.036565 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Apr 30 03:23:00.036734 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 30 03:23:00.036903 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 30 03:23:00.037060 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:23:00.037082 kernel: vgaarb: loaded
Apr 30 03:23:00.037099 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:23:00.037116 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:23:00.037134 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:23:00.037151 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:23:00.037168 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:23:00.037185 kernel: pnp: PnP ACPI init
Apr 30 03:23:00.037202 kernel: pnp: PnP ACPI: found 4 devices
Apr 30 03:23:00.037224 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:23:00.037241 kernel: NET: Registered PF_INET protocol family
Apr 30 03:23:00.037257 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:23:00.037274 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:23:00.037291 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:23:00.037307 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:23:00.037324 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:23:00.037342 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:23:00.037358 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:23:00.037378 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:23:00.037396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:23:00.037412 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:23:00.037557 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:23:00.037712 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:23:00.039982 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:23:00.040159 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 03:23:00.040298 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Apr 30 03:23:00.040514 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 30 03:23:00.040678 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 03:23:00.040703 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 03:23:00.040910 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 31218 usecs
Apr 30 03:23:00.040933 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:23:00.040952 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:23:00.040970 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Apr 30 03:23:00.040987 kernel: Initialise system trusted keyrings
Apr 30 03:23:00.041012 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:23:00.041029 kernel: Key type asymmetric registered
Apr 30 03:23:00.041046 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:23:00.041064 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:23:00.041081 kernel: io scheduler mq-deadline registered
Apr 30 03:23:00.041098 kernel: io scheduler kyber registered
Apr 30 03:23:00.041115 kernel: io scheduler bfq registered
Apr 30 03:23:00.041133 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:23:00.041150 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Apr 30 03:23:00.041167 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 30 03:23:00.041189 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 30 03:23:00.041206 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:23:00.041223 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:23:00.041240 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:23:00.041257 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:23:00.041274 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:23:00.041292 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:23:00.041485 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 03:23:00.041656 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 03:23:00.043959 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:22:59 UTC (1745983379)
Apr 30 03:23:00.044139 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Apr 30 03:23:00.044162 kernel: intel_pstate: CPU model not supported
Apr 30 03:23:00.044180 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:23:00.044197 kernel: Segment Routing with IPv6
Apr 30 03:23:00.044216 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:23:00.044230 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:23:00.044255 kernel: Key type dns_resolver registered
Apr 30 03:23:00.044268 kernel: IPI shorthand broadcast: enabled
Apr 30 03:23:00.044281 kernel: sched_clock: Marking stable (855006315, 89979663)->(1040955489, -95969511)
Apr 30 03:23:00.044294 kernel: registered taskstats version 1
Apr 30 03:23:00.044308 kernel: Loading compiled-in X.509 certificates
Apr 30 03:23:00.044322 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:23:00.044336 kernel: Key type .fscrypt registered
Apr 30 03:23:00.044349 kernel: Key type fscrypt-provisioning registered
Apr 30 03:23:00.044363 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:23:00.044383 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:23:00.044398 kernel: ima: No architecture policies found
Apr 30 03:23:00.044413 kernel: clk: Disabling unused clocks
Apr 30 03:23:00.044429 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:23:00.044445 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:23:00.044488 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:23:00.044506 kernel: Run /init as init process
Apr 30 03:23:00.044522 kernel: with arguments:
Apr 30 03:23:00.044536 kernel: /init
Apr 30 03:23:00.044555 kernel: with environment:
Apr 30 03:23:00.044571 kernel: HOME=/
Apr 30 03:23:00.044586 kernel: TERM=linux
Apr 30 03:23:00.044602 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:23:00.044628 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:23:00.044648 systemd[1]: Detected virtualization kvm.
Apr 30 03:23:00.044666 systemd[1]: Detected architecture x86-64.
Apr 30 03:23:00.044684 systemd[1]: Running in initrd.
Apr 30 03:23:00.044704 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:23:00.044721 systemd[1]: Hostname set to .
Apr 30 03:23:00.044739 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:23:00.044784 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:23:00.044801 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:23:00.044819 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:23:00.044839 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:23:00.044866 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:23:00.044889 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:23:00.044907 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:23:00.044927 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:23:00.044944 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:23:00.044961 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:23:00.044985 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:23:00.045007 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:23:00.045024 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:23:00.045042 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:23:00.045063 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:23:00.045081 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:23:00.045099 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:23:00.045121 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:23:00.045138 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:23:00.045155 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:23:00.045172 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:23:00.045189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:23:00.045203 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:23:00.045218 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:23:00.045233 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:23:00.045253 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:23:00.045269 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:23:00.045285 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:23:00.045301 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:23:00.045318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:23:00.045334 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:23:00.045351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:23:00.045368 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:23:00.045442 systemd-journald[183]: Collecting audit messages is disabled.
Apr 30 03:23:00.045486 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:23:00.045502 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:23:00.045519 systemd-journald[183]: Journal started
Apr 30 03:23:00.045551 systemd-journald[183]: Runtime Journal (/run/log/journal/7519e81931fb410d83c953a15b10282b) is 4.9M, max 39.3M, 34.4M free.
Apr 30 03:23:00.026346 systemd-modules-load[184]: Inserted module 'overlay'
Apr 30 03:23:00.057534 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:23:00.058528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:00.076943 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:23:00.077277 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:23:00.079900 kernel: Bridge firewalling registered
Apr 30 03:23:00.079401 systemd-modules-load[184]: Inserted module 'br_netfilter'
Apr 30 03:23:00.081191 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:23:00.087171 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:23:00.089997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:23:00.101024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:23:00.118107 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:23:00.128062 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:23:00.128985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:23:00.130160 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:23:00.138670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:23:00.144933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:23:00.159185 dracut-cmdline[214]: dracut-dracut-053
Apr 30 03:23:00.165791 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:23:00.202150 systemd-resolved[221]: Positive Trust Anchors:
Apr 30 03:23:00.202166 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:23:00.202203 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:23:00.205521 systemd-resolved[221]: Defaulting to hostname 'linux'.
Apr 30 03:23:00.208554 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:23:00.209175 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:23:00.283825 kernel: SCSI subsystem initialized
Apr 30 03:23:00.296794 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:23:00.309796 kernel: iscsi: registered transport (tcp)
Apr 30 03:23:00.337803 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:23:00.337878 kernel: QLogic iSCSI HBA Driver
Apr 30 03:23:00.398120 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:23:00.407172 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:23:00.438027 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:23:00.438120 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:23:00.438146 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:23:00.486788 kernel: raid6: avx2x4 gen() 14957 MB/s
Apr 30 03:23:00.503803 kernel: raid6: avx2x2 gen() 15689 MB/s
Apr 30 03:23:00.520988 kernel: raid6: avx2x1 gen() 10939 MB/s
Apr 30 03:23:00.521076 kernel: raid6: using algorithm avx2x2 gen() 15689 MB/s
Apr 30 03:23:00.539195 kernel: raid6: .... xor() 18347 MB/s, rmw enabled
Apr 30 03:23:00.539366 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 03:23:00.563800 kernel: xor: automatically using best checksumming function avx
Apr 30 03:23:00.758864 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:23:00.779575 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:23:00.792094 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:23:00.814143 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Apr 30 03:23:00.822395 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:23:00.832578 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:23:00.867263 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Apr 30 03:23:00.926757 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:23:00.935234 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:23:01.027321 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:23:01.040066 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:23:01.072590 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:23:01.075882 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:23:01.077828 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:23:01.079537 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:23:01.087089 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:23:01.133645 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:23:01.159802 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Apr 30 03:23:01.194052 kernel: scsi host0: Virtio SCSI HBA
Apr 30 03:23:01.194315 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Apr 30 03:23:01.194519 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:23:01.194542 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:23:01.194561 kernel: GPT:9289727 != 125829119
Apr 30 03:23:01.194577 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:23:01.194592 kernel: GPT:9289727 != 125829119
Apr 30 03:23:01.194608 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:23:01.194624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:23:01.195781 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Apr 30 03:23:01.202104 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Apr 30 03:23:01.234135 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:23:01.234382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:23:01.237146 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:23:01.237890 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:23:01.238193 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:01.238840 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:23:01.255055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:23:01.267289 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:23:01.267393 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:23:01.299799 kernel: ACPI: bus type USB registered
Apr 30 03:23:01.300769 kernel: usbcore: registered new interface driver usbfs
Apr 30 03:23:01.302400 kernel: usbcore: registered new interface driver hub
Apr 30 03:23:01.302486 kernel: usbcore: registered new device driver usb
Apr 30 03:23:01.303903 kernel: libata version 3.00 loaded.
Apr 30 03:23:01.332886 kernel: ata_piix 0000:00:01.1: version 2.13
Apr 30 03:23:01.347617 kernel: scsi host1: ata_piix
Apr 30 03:23:01.348360 kernel: scsi host2: ata_piix
Apr 30 03:23:01.348550 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Apr 30 03:23:01.348570 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Apr 30 03:23:01.334159 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 03:23:01.390788 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (450)
Apr 30 03:23:01.390839 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452)
Apr 30 03:23:01.392325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:01.412448 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 03:23:01.424892 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 03:23:01.431860 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 03:23:01.433259 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 03:23:01.453631 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:23:01.459276 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:23:01.465300 disk-uuid[540]: Primary Header is updated.
Apr 30 03:23:01.465300 disk-uuid[540]: Secondary Entries is updated.
Apr 30 03:23:01.465300 disk-uuid[540]: Secondary Header is updated.
Apr 30 03:23:01.475814 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:23:01.490819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:23:01.514330 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:23:01.539793 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Apr 30 03:23:01.557645 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Apr 30 03:23:01.558106 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Apr 30 03:23:01.558254 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Apr 30 03:23:01.558430 kernel: hub 1-0:1.0: USB hub found
Apr 30 03:23:01.558656 kernel: hub 1-0:1.0: 2 ports detected
Apr 30 03:23:02.488801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:23:02.490342 disk-uuid[541]: The operation has completed successfully.
Apr 30 03:23:02.532553 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:23:02.532686 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:23:02.556033 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:23:02.560876 sh[561]: Success
Apr 30 03:23:02.579864 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:23:02.684539 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:23:02.688296 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:23:02.689979 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:23:02.732707 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:23:02.732792 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:23:02.732812 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:23:02.732830 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:23:02.732848 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:23:02.742694 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:23:02.744658 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:23:02.757021 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:23:02.759716 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:23:02.775340 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:23:02.775426 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:23:02.775449 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:23:02.779805 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:23:02.792631 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:23:02.793829 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:23:02.799444 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:23:02.805004 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:23:02.948994 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:23:02.960632 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:23:02.989789 ignition[651]: Ignition 2.19.0
Apr 30 03:23:02.989808 ignition[651]: Stage: fetch-offline
Apr 30 03:23:02.989878 ignition[651]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:02.989891 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:23:02.990054 ignition[651]: parsed url from cmdline: ""
Apr 30 03:23:02.990059 ignition[651]: no config URL provided
Apr 30 03:23:02.990067 ignition[651]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:23:02.994242 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:23:02.990086 ignition[651]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:23:02.990096 ignition[651]: failed to fetch config: resource requires networking
Apr 30 03:23:02.990400 ignition[651]: Ignition finished successfully
Apr 30 03:23:03.017481 systemd-networkd[749]: lo: Link UP
Apr 30 03:23:03.017502 systemd-networkd[749]: lo: Gained carrier
Apr 30 03:23:03.021186 systemd-networkd[749]: Enumeration completed
Apr 30 03:23:03.021848 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 03:23:03.021854 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Apr 30 03:23:03.022877 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:23:03.023932 systemd[1]: Reached target network.target - Network.
Apr 30 03:23:03.024146 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:23:03.024152 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:23:03.025604 systemd-networkd[749]: eth0: Link UP
Apr 30 03:23:03.025609 systemd-networkd[749]: eth0: Gained carrier
Apr 30 03:23:03.025622 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 03:23:03.031426 systemd-networkd[749]: eth1: Link UP
Apr 30 03:23:03.031437 systemd-networkd[749]: eth1: Gained carrier
Apr 30 03:23:03.031459 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:23:03.035195 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:23:03.049918 systemd-networkd[749]: eth0: DHCPv4 address 64.227.96.87/20, gateway 64.227.96.1 acquired from 169.254.169.253
Apr 30 03:23:03.054913 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253
Apr 30 03:23:03.067428 ignition[754]: Ignition 2.19.0
Apr 30 03:23:03.067443 ignition[754]: Stage: fetch
Apr 30 03:23:03.067784 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:03.067803 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:23:03.067995 ignition[754]: parsed url from cmdline: ""
Apr 30 03:23:03.068001 ignition[754]: no config URL provided
Apr 30 03:23:03.068010 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:23:03.068024 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:23:03.068057 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Apr 30 03:23:03.082351 ignition[754]: GET result: OK
Apr 30 03:23:03.082570 ignition[754]: parsing config with SHA512: 7cd2838ac8c190fbe87d6e7ce3fbab0eae8821ec93e130cdd7de51b57eb966b6efed291705ccb2a97000780f3cd77687c5083b34c981b5ea3c3c63b5927382d6
Apr 30 03:23:03.090558 unknown[754]: fetched base config from "system"
Apr 30 03:23:03.090574 unknown[754]: fetched base config from "system"
Apr 30 03:23:03.090583 unknown[754]: fetched user config from "digitalocean"
Apr 30 03:23:03.092098 ignition[754]: fetch: fetch complete
Apr 30 03:23:03.092108 ignition[754]: fetch: fetch passed
Apr 30 03:23:03.092205 ignition[754]: Ignition finished successfully
Apr 30 03:23:03.095550 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:23:03.102220 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:23:03.140459 ignition[761]: Ignition 2.19.0
Apr 30 03:23:03.141450 ignition[761]: Stage: kargs
Apr 30 03:23:03.142191 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:03.142592 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:23:03.144402 ignition[761]: kargs: kargs passed
Apr 30 03:23:03.144578 ignition[761]: Ignition finished successfully
Apr 30 03:23:03.146199 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:23:03.154121 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:23:03.192527 ignition[767]: Ignition 2.19.0
Apr 30 03:23:03.192544 ignition[767]: Stage: disks
Apr 30 03:23:03.192930 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:03.192950 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:23:03.194516 ignition[767]: disks: disks passed
Apr 30 03:23:03.194618 ignition[767]: Ignition finished successfully
Apr 30 03:23:03.196472 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:23:03.201887 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:23:03.202544 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:23:03.203652 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:23:03.204485 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:23:03.205121 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:23:03.213166 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:23:03.249415 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 03:23:03.253876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:23:03.259447 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:23:03.369977 kernel: EXT4-fs (vda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:23:03.371186 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:23:03.372504 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:23:03.385015 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:23:03.388728 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:23:03.390291 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Apr 30 03:23:03.399776 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (783)
Apr 30 03:23:03.399022 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:23:03.400938 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:23:03.408119 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:23:03.408176 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:23:03.408219 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:23:03.401004 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:23:03.412795 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:23:03.416152 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:23:03.418069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:23:03.426198 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:23:03.516653 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:23:03.522787 coreos-metadata[785]: Apr 30 03:23:03.521 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:23:03.524926 coreos-metadata[786]: Apr 30 03:23:03.524 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:23:03.529662 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:23:03.534514 coreos-metadata[785]: Apr 30 03:23:03.533 INFO Fetch successful Apr 30 03:23:03.535313 coreos-metadata[786]: Apr 30 03:23:03.534 INFO Fetch successful Apr 30 03:23:03.541578 coreos-metadata[786]: Apr 30 03:23:03.540 INFO wrote hostname ci-4081.3.3-0-0c5ff7085f to /sysroot/etc/hostname Apr 30 03:23:03.543634 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:23:03.545895 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:23:03.545501 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Apr 30 03:23:03.545621 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Apr 30 03:23:03.553266 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:23:03.668850 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:23:03.673990 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:23:03.676012 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:23:03.690822 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:23:03.722347 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:23:03.729730 ignition[904]: INFO : Ignition 2.19.0 Apr 30 03:23:03.731392 ignition[904]: INFO : Stage: mount Apr 30 03:23:03.731392 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:23:03.731392 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:23:03.729956 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:23:03.735631 ignition[904]: INFO : mount: mount passed Apr 30 03:23:03.735631 ignition[904]: INFO : Ignition finished successfully Apr 30 03:23:03.735314 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:23:03.742014 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:23:03.777160 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:23:03.788916 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917) Apr 30 03:23:03.789015 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:23:03.791804 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:23:03.791910 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:23:03.794812 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:23:03.798362 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
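Both coreos-metadata fetches above hit http://169.254.169.254/metadata/v1.json, and one of them derives the hostname that gets written to /sysroot/etc/hostname. A sketch of that lookup, assuming the droplet metadata document exposes a top-level "hostname" field as DigitalOcean's v1 metadata API documents (droplet_hostname is an illustrative name, not coreos-metadata code):

# Illustrative re-creation of the hostname fetch logged above; runs only
# on a droplet with access to the link-local metadata service.
import json
import urllib.request

def droplet_hostname(timeout: float = 5.0) -> str:
    url = "http://169.254.169.254/metadata/v1.json"  # endpoint from the log
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        metadata = json.load(resp)
    return metadata["hostname"]  # "ci-4081.3.3-0-0c5ff7085f" in this boot

if __name__ == "__main__":
    print(droplet_hostname())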
Apr 30 03:23:03.832550 ignition[934]: INFO : Ignition 2.19.0 Apr 30 03:23:03.832550 ignition[934]: INFO : Stage: files Apr 30 03:23:03.834055 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:23:03.834055 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:23:03.834055 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:23:03.836203 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:23:03.836203 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:23:03.839513 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:23:03.840480 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:23:03.841808 unknown[934]: wrote ssh authorized keys file for user: core Apr 30 03:23:03.842797 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:23:03.844818 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 03:23:03.845857 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 03:23:03.845857 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:23:03.845857 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:23:03.895798 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 03:23:04.054829 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:23:04.054829 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:23:04.054829 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:23:04.054829 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:23:04.054829 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:23:04.054829 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:23:04.054829 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:23:04.054829 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:23:04.060463 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:23:04.060463 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:23:04.060463 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:23:04.060463 
ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:23:04.060463 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:23:04.060463 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:23:04.060463 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 03:23:04.431014 systemd-networkd[749]: eth0: Gained IPv6LL Apr 30 03:23:04.752376 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 03:23:05.007362 systemd-networkd[749]: eth1: Gained IPv6LL Apr 30 03:23:05.083560 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:23:05.083560 ignition[934]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 30 03:23:05.085262 ignition[934]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 03:23:05.085262 ignition[934]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 03:23:05.085262 ignition[934]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 30 03:23:05.085262 ignition[934]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 30 03:23:05.085262 ignition[934]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:23:05.085262 ignition[934]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:23:05.085262 ignition[934]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 30 03:23:05.085262 ignition[934]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:23:05.091310 ignition[934]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:23:05.091310 ignition[934]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:23:05.091310 ignition[934]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:23:05.091310 ignition[934]: INFO : files: files passed Apr 30 03:23:05.091310 ignition[934]: INFO : Ignition finished successfully Apr 30 03:23:05.088276 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:23:05.095077 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:23:05.097920 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:23:05.103373 systemd[1]: ignition-quench.service: Deactivated successfully. 
Apr 30 03:23:05.104162 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:23:05.124457 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:23:05.124457 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:23:05.127017 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:23:05.128874 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:23:05.130100 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:23:05.135169 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:23:05.178146 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:23:05.178324 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:23:05.179531 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:23:05.180108 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:23:05.180941 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:23:05.187081 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:23:05.204728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:23:05.213015 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:23:05.225613 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:23:05.226793 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:23:05.227949 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:23:05.228388 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:23:05.228560 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:23:05.229639 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:23:05.230106 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:23:05.230838 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:23:05.231600 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:23:05.232380 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:23:05.233220 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:23:05.233976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:23:05.234801 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:23:05.235579 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:23:05.236371 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:23:05.236984 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:23:05.237163 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:23:05.238021 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:23:05.238484 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:23:05.239265 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Apr 30 03:23:05.239405 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:23:05.239995 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:23:05.240144 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:23:05.241245 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:23:05.241402 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:23:05.242169 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:23:05.242312 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:23:05.243205 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 03:23:05.243315 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:23:05.250099 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:23:05.250605 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:23:05.250784 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:23:05.252701 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:23:05.257351 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:23:05.257533 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:23:05.262151 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:23:05.263826 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:23:05.269828 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:23:05.269994 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:23:05.286726 ignition[986]: INFO : Ignition 2.19.0 Apr 30 03:23:05.289713 ignition[986]: INFO : Stage: umount Apr 30 03:23:05.289713 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:23:05.289713 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:23:05.289713 ignition[986]: INFO : umount: umount passed Apr 30 03:23:05.289713 ignition[986]: INFO : Ignition finished successfully Apr 30 03:23:05.293046 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:23:05.296514 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:23:05.297213 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:23:05.298487 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:23:05.299181 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:23:05.300578 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:23:05.300709 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:23:05.301521 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:23:05.301608 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:23:05.302461 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:23:05.302515 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:23:05.303350 systemd[1]: Stopped target network.target - Network. Apr 30 03:23:05.304075 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:23:05.304167 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 30 03:23:05.304931 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:23:05.305601 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:23:05.309061 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:23:05.309618 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:23:05.310581 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:23:05.311517 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:23:05.311607 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:23:05.312293 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:23:05.312337 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:23:05.313052 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:23:05.313111 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:23:05.313805 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:23:05.313861 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:23:05.314617 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:23:05.314683 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:23:05.315572 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:23:05.316723 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:23:05.321199 systemd-networkd[749]: eth0: DHCPv6 lease lost Apr 30 03:23:05.327837 systemd-networkd[749]: eth1: DHCPv6 lease lost Apr 30 03:23:05.327843 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:23:05.327976 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:23:05.331338 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:23:05.331521 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:23:05.334073 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:23:05.334149 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:23:05.339948 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:23:05.340412 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:23:05.340491 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:23:05.340960 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:23:05.341007 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:23:05.341413 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:23:05.341458 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:23:05.341861 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:23:05.341902 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:23:05.343010 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:23:05.366130 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:23:05.366354 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:23:05.367519 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 30 03:23:05.367709 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:23:05.368152 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:23:05.368188 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:23:05.368925 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:23:05.368980 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:23:05.370110 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:23:05.370167 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:23:05.371531 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:23:05.371587 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:23:05.388107 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:23:05.389178 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:23:05.389294 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:23:05.390442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:23:05.390502 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:23:05.392951 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:23:05.393092 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:23:05.395928 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:23:05.396105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:23:05.398423 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:23:05.409119 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:23:05.420521 systemd[1]: Switching root. Apr 30 03:23:05.449447 systemd-journald[183]: Journal stopped Apr 30 03:23:06.870319 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Apr 30 03:23:06.870450 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:23:06.870475 kernel: SELinux: policy capability open_perms=1 Apr 30 03:23:06.870502 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:23:06.870520 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:23:06.870538 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:23:06.870565 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:23:06.870584 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:23:06.870601 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:23:06.870627 kernel: audit: type=1403 audit(1745983385.700:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:23:06.870648 systemd[1]: Successfully loaded SELinux policy in 41.486ms. Apr 30 03:23:06.870690 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.528ms. Apr 30 03:23:06.870719 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:23:06.873214 systemd[1]: Detected virtualization kvm. Apr 30 03:23:06.873292 systemd[1]: Detected architecture x86-64. 
Apr 30 03:23:06.873328 systemd[1]: Detected first boot. Apr 30 03:23:06.873350 systemd[1]: Hostname set to <ci-4081.3.3-0-0c5ff7085f>. Apr 30 03:23:06.873370 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:23:06.873397 zram_generator::config[1047]: No configuration found. Apr 30 03:23:06.873420 systemd[1]: Populated /etc with preset unit settings. Apr 30 03:23:06.873446 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:23:06.873464 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 03:23:06.873483 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:23:06.873502 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:23:06.873520 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:23:06.873538 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:23:06.873556 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:23:06.873576 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:23:06.873606 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:23:06.873625 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:23:06.873642 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:23:06.873662 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:23:06.873680 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:23:06.873699 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:23:06.873718 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:23:06.873736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:23:06.875308 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:23:06.875351 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:23:06.875370 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:23:06.875388 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:23:06.875408 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:23:06.875427 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:23:06.875446 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:23:06.875466 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:23:06.875490 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:23:06.875512 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:23:06.875534 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:23:06.875555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:23:06.875577 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:23:06.875598 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:23:06.875619 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
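"Initializing machine ID from VM UUID" refers to systemd deriving /etc/machine-id from the hypervisor-provided DMI product UUID on KVM guests. A minimal sketch of that derivation, assuming a KVM guest where /sys/class/dmi/id/product_uuid is readable (usually root-only); the dash-stripping and lowercasing mirror machine-id formatting:

# Read the DMI product UUID that a KVM hypervisor exposes to the guest and
# normalize it to the 32-hex-character /etc/machine-id form. Illustrative
# sketch; systemd's actual logic lives in machine-id-setup.
from pathlib import Path

def machine_id_from_dmi() -> str:
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return uuid.replace("-", "").lower()

if __name__ == "__main__":
    print(machine_id_from_dmi())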
Apr 30 03:23:06.875639 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 03:23:06.875659 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:23:06.875683 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:23:06.875703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:06.875723 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:23:06.876274 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:23:06.876329 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:23:06.876350 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:23:06.876369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:23:06.876387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:23:06.876405 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:23:06.876434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:23:06.876453 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:23:06.876471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:23:06.876489 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:23:06.876506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:23:06.876528 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:23:06.876548 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 30 03:23:06.876568 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 30 03:23:06.876591 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:23:06.876610 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:23:06.876629 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:23:06.876647 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:23:06.876666 kernel: fuse: init (API version 7.39) Apr 30 03:23:06.876686 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:23:06.876707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:06.876727 kernel: ACPI: bus type drm_connector registered Apr 30 03:23:06.878476 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:23:06.878518 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:23:06.878538 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:23:06.878556 kernel: loop: module loaded Apr 30 03:23:06.878576 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:23:06.878595 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Apr 30 03:23:06.878613 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:23:06.878634 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:23:06.878653 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:23:06.878682 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:23:06.878701 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:23:06.878719 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:23:06.878736 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:23:06.880341 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:23:06.880443 systemd-journald[1137]: Collecting audit messages is disabled. Apr 30 03:23:06.880494 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:23:06.880520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:23:06.880544 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:23:06.880567 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:23:06.880591 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:23:06.880620 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:23:06.880644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:23:06.880668 systemd-journald[1137]: Journal started Apr 30 03:23:06.880716 systemd-journald[1137]: Runtime Journal (/run/log/journal/7519e81931fb410d83c953a15b10282b) is 4.9M, max 39.3M, 34.4M free. Apr 30 03:23:06.883921 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:23:06.886502 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:23:06.889179 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:23:06.891464 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:23:06.910965 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:23:06.919020 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:23:06.927933 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:23:06.928713 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:23:06.938541 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:23:06.958158 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:23:06.962025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:23:06.970782 systemd-journald[1137]: Time spent on flushing to /var/log/journal/7519e81931fb410d83c953a15b10282b is 120.026ms for 968 entries. Apr 30 03:23:06.970782 systemd-journald[1137]: System Journal (/var/log/journal/7519e81931fb410d83c953a15b10282b) is 8.0M, max 195.6M, 187.6M free. Apr 30 03:23:07.113633 systemd-journald[1137]: Received client request to flush runtime journal. Apr 30 03:23:06.970044 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
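The journald lines above size the volatile runtime journal under /run/log/journal and then flush it to the persistent system journal under /var/log/journal. Entries like the ones quoted throughout this log can be read back programmatically; a sketch using the python-systemd bindings, assuming that package is installed and the caller has permission to read the journal files:

# Read this boot's entries for systemd-networkd, the same stream quoted in
# this log. Requires the python-systemd package and journal read permission.
from systemd import journal

reader = journal.Reader()
reader.this_boot()  # restrict to the current boot
reader.add_match(_SYSTEMD_UNIT="systemd-networkd.service")

for entry in reader:
    print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])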
Apr 30 03:23:06.972262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:23:06.979083 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:23:06.992207 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:23:07.002460 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:23:07.003414 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:23:07.034516 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:23:07.035339 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:23:07.057107 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Apr 30 03:23:07.057124 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Apr 30 03:23:07.060417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:23:07.071642 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:23:07.074436 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:23:07.086150 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:23:07.098065 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:23:07.119503 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:23:07.144504 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 03:23:07.162756 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:23:07.177228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:23:07.219715 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Apr 30 03:23:07.220350 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Apr 30 03:23:07.231538 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:23:07.930972 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:23:07.938031 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:23:07.989960 systemd-udevd[1218]: Using default interface naming scheme 'v255'. Apr 30 03:23:08.014458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:23:08.025993 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:23:08.049810 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:23:08.083892 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 30 03:23:08.157986 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:08.158237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:23:08.167776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 30 03:23:08.167892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 30 03:23:08.171781 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:23:08.181943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:23:08.199963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:23:08.200581 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:23:08.200632 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:23:08.200685 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:08.201853 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:23:08.228231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:23:08.228501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:23:08.229372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:23:08.229550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:23:08.232625 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:23:08.236098 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:23:08.242648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:23:08.242786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:23:08.249810 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Apr 30 03:23:08.271800 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Apr 30 03:23:08.279215 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Apr 30 03:23:08.294821 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1220) Apr 30 03:23:08.299773 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:23:08.315606 systemd-networkd[1222]: lo: Link UP Apr 30 03:23:08.317690 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 03:23:08.317964 kernel: [drm] features: -context_init Apr 30 03:23:08.315620 systemd-networkd[1222]: lo: Gained carrier Apr 30 03:23:08.322899 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 30 03:23:08.326782 kernel: [drm] number of scanouts: 1 Apr 30 03:23:08.326860 kernel: [drm] number of cap sets: 0 Apr 30 03:23:08.342377 systemd-networkd[1222]: Enumeration completed Apr 30 03:23:08.345795 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Apr 30 03:23:08.348001 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:23:08.363280 systemd-networkd[1222]: eth0: Configuring with /run/systemd/network/10-8a:30:b3:9b:10:9c.network. Apr 30 03:23:08.364138 systemd-networkd[1222]: eth1: Configuring with /run/systemd/network/10-c6:43:4c:31:cf:09.network. Apr 30 03:23:08.364686 systemd-networkd[1222]: eth0: Link UP Apr 30 03:23:08.364691 systemd-networkd[1222]: eth0: Gained carrier Apr 30 03:23:08.366059 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
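Unlike the initrd earlier, networkd now matches each NIC by MAC address against generated units such as /run/systemd/network/10-8a:30:b3:9b:10:9c.network, sidestepping the "potentially unpredictable interface name" warnings. A minimal sketch of writing such a unit, purely for illustration: the [Match]/[Network] keys are standard systemd.network syntax, but the generated files' real contents do not appear in this log, so the DHCP policy below is an assumption:

# Write a minimal systemd.network unit keyed on a MAC address. Needs write
# access to the target directory (root); DHCP=ipv4 is an assumed policy.
from pathlib import Path

def write_network_unit(mac: str, directory: str = "/run/systemd/network") -> Path:
    unit = Path(directory) / f"10-{mac}.network"
    unit.write_text(
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        "DHCP=ipv4\n"
    )
    return unit

if __name__ == "__main__":
    print(write_network_unit("8a:30:b3:9b:10:9c"))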
Apr 30 03:23:08.369777 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 30 03:23:08.369229 systemd-networkd[1222]: eth1: Link UP Apr 30 03:23:08.369237 systemd-networkd[1222]: eth1: Gained carrier Apr 30 03:23:08.378357 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:23:08.387920 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 03:23:08.425182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:23:08.428841 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:23:08.442509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:23:08.442864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:23:08.452066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:23:08.466065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:23:08.466316 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:23:08.478184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:23:08.486236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:23:08.597230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:23:08.644298 kernel: EDAC MC: Ver: 3.0.0 Apr 30 03:23:08.672622 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:23:08.684213 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:23:08.703332 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:23:08.733353 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:23:08.736371 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:23:08.744173 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:23:08.756226 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:23:08.788002 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:23:08.790976 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:23:08.798971 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Apr 30 03:23:08.800857 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:23:08.800922 systemd[1]: Reached target machines.target - Containers. Apr 30 03:23:08.809259 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:23:08.830952 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:23:08.839793 kernel: ISO 9660 Extensions: RRIP_1991A Apr 30 03:23:08.841471 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Apr 30 03:23:08.845737 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:23:08.851072 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:23:08.862102 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Apr 30 03:23:08.868002 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:23:08.871609 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:23:08.882157 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:23:08.894061 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:23:08.901352 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:23:08.923799 kernel: loop0: detected capacity change from 0 to 210664 Apr 30 03:23:08.937763 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:23:08.939703 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:23:08.962136 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:23:08.988297 kernel: loop1: detected capacity change from 0 to 142488 Apr 30 03:23:09.044791 kernel: loop2: detected capacity change from 0 to 140768 Apr 30 03:23:09.101035 kernel: loop3: detected capacity change from 0 to 8 Apr 30 03:23:09.127950 kernel: loop4: detected capacity change from 0 to 210664 Apr 30 03:23:09.149591 kernel: loop5: detected capacity change from 0 to 142488 Apr 30 03:23:09.170850 kernel: loop6: detected capacity change from 0 to 140768 Apr 30 03:23:09.191998 kernel: loop7: detected capacity change from 0 to 8 Apr 30 03:23:09.193591 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Apr 30 03:23:09.194410 (sd-merge)[1308]: Merged extensions into '/usr'. Apr 30 03:23:09.215551 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:23:09.215568 systemd[1]: Reloading... Apr 30 03:23:09.360501 zram_generator::config[1335]: No configuration found. Apr 30 03:23:09.539384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:23:09.564532 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:23:09.619203 systemd[1]: Reloading finished in 402 ms. Apr 30 03:23:09.639233 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:23:09.643107 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:23:09.656086 systemd[1]: Starting ensure-sysext.service... Apr 30 03:23:09.659951 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:23:09.675006 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:23:09.675423 systemd[1]: Reloading... Apr 30 03:23:09.701021 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:23:09.701436 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:23:09.703344 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:23:09.703667 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Apr 30 03:23:09.703770 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. 
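The (sd-merge) lines above are systemd-sysext overlaying the staged extension images (including the kubernetes.raw symlink Ignition created under /etc/extensions) onto /usr, which is why loop0 through loop7 appear just before the merge. A small sketch that lists extension images the way one might audit them, scanning a common subset of systemd-sysext's documented search directories:

# Enumerate sysext images/symlinks, similar in spirit to `systemd-sysext
# list`. The directory list is a subset of the documented search path.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for directory in map(Path, SEARCH_DIRS):
    if not directory.is_dir():
        continue
    for image in sorted(directory.glob("*.raw")):
        target = image.resolve() if image.is_symlink() else image
        print(f"{image} -> {target}")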
Apr 30 03:23:09.707376 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:23:09.709058 systemd-tmpfiles[1387]: Skipping /boot Apr 30 03:23:09.724304 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:23:09.724318 systemd-tmpfiles[1387]: Skipping /boot Apr 30 03:23:09.764779 zram_generator::config[1412]: No configuration found. Apr 30 03:23:09.970570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:23:10.062953 systemd-networkd[1222]: eth0: Gained IPv6LL Apr 30 03:23:10.082621 systemd[1]: Reloading finished in 406 ms. Apr 30 03:23:10.101417 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:23:10.109575 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:23:10.125165 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:23:10.131156 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:23:10.137085 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:23:10.150235 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:23:10.157283 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:23:10.170093 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:10.170372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:23:10.179160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:23:10.186099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:23:10.208073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:23:10.208676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:23:10.208856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:10.224233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:10.224648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:23:10.225002 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:23:10.225219 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:10.231232 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:23:10.235655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:23:10.245979 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:23:10.257377 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 30 03:23:10.257596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:23:10.271162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:23:10.281820 augenrules[1497]: No rules Apr 30 03:23:10.273336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:23:10.279413 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:23:10.288459 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:23:10.302090 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:10.302441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:23:10.314218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:23:10.331080 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:23:10.337025 systemd-resolved[1478]: Positive Trust Anchors: Apr 30 03:23:10.337048 systemd-resolved[1478]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:23:10.337105 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:23:10.340947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:23:10.345264 systemd-resolved[1478]: Using system hostname 'ci-4081.3.3-0-0c5ff7085f'. Apr 30 03:23:10.352020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:23:10.356244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:23:10.383116 systemd-networkd[1222]: eth1: Gained IPv6LL Apr 30 03:23:10.388383 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:23:10.390162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:23:10.392111 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:23:10.396213 systemd[1]: Finished ensure-sysext.service. Apr 30 03:23:10.397682 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:23:10.400379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:23:10.400671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:23:10.405057 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:23:10.405298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:23:10.407614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:23:10.409007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:23:10.411907 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 30 03:23:10.412119 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:23:10.424902 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:23:10.436554 systemd[1]: Reached target network.target - Network. Apr 30 03:23:10.438450 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:23:10.440107 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:23:10.441548 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:23:10.441672 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:23:10.453600 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 03:23:10.454302 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:23:10.518648 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:23:10.523823 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:23:10.524527 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:23:10.527298 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:23:10.527879 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:23:10.528325 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:23:10.528361 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:23:10.528859 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:23:10.529514 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:23:10.530269 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:23:10.530805 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:23:10.534957 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:23:10.538718 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:23:10.542821 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:23:10.545178 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:23:10.545884 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:23:10.546478 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:23:10.548277 systemd[1]: System is tainted: cgroupsv1 Apr 30 03:23:10.548354 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:23:10.548400 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:23:10.551376 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:23:10.555514 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:23:10.566076 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:23:10.580976 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 30 03:23:10.591399 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:23:10.593721 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:23:10.612938 jq[1540]: false Apr 30 03:23:10.608091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:23:10.621383 coreos-metadata[1535]: Apr 30 03:23:10.621 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:23:10.630370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:23:10.636975 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:23:10.640447 coreos-metadata[1535]: Apr 30 03:23:10.638 INFO Fetch successful Apr 30 03:23:10.648956 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:23:10.655535 dbus-daemon[1536]: [system] SELinux support is enabled Apr 30 03:23:10.667992 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:23:10.677241 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:23:10.701107 extend-filesystems[1541]: Found loop4 Apr 30 03:23:10.701107 extend-filesystems[1541]: Found loop5 Apr 30 03:23:11.252285 extend-filesystems[1541]: Found loop6 Apr 30 03:23:11.252285 extend-filesystems[1541]: Found loop7 Apr 30 03:23:11.252285 extend-filesystems[1541]: Found vda Apr 30 03:23:11.252285 extend-filesystems[1541]: Found vda1 Apr 30 03:23:11.252285 extend-filesystems[1541]: Found vda2 Apr 30 03:23:11.252285 extend-filesystems[1541]: Found vda3 Apr 30 03:23:11.252285 extend-filesystems[1541]: Found usr Apr 30 03:23:11.252285 extend-filesystems[1541]: Found vda4 Apr 30 03:23:11.252285 extend-filesystems[1541]: Found vda6 Apr 30 03:23:11.252285 extend-filesystems[1541]: Found vda7 Apr 30 03:23:11.252285 extend-filesystems[1541]: Found vda9 Apr 30 03:23:11.252285 extend-filesystems[1541]: Checking size of /dev/vda9 Apr 30 03:23:11.246596 systemd-timesyncd[1530]: Contacted time server 12.205.28.193:123 (0.flatcar.pool.ntp.org). Apr 30 03:23:11.246684 systemd-timesyncd[1530]: Initial clock synchronization to Wed 2025-04-30 03:23:11.246391 UTC. Apr 30 03:23:11.246755 systemd-resolved[1478]: Clock change detected. Flushing caches. Apr 30 03:23:11.256410 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:23:11.266039 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:23:11.270652 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:23:11.289514 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:23:11.303782 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:23:11.326650 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:23:11.326939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:23:11.342386 update_engine[1565]: I20250430 03:23:11.341276 1565 main.cc:92] Flatcar Update Engine starting Apr 30 03:23:11.344533 update_engine[1565]: I20250430 03:23:11.343071 1565 update_check_scheduler.cc:74] Next update check in 6m0s Apr 30 03:23:11.344638 systemd[1]: motdgen.service: Deactivated successfully. 
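
coreos-metadata resolves droplet configuration from the link-local metadata service, and the "Attempt #1" prefix shows it retries until the endpoint answers. A sketch of that fetch loop, using the URL from the log (the retry count and backoff here are assumptions for illustration, not the agent's real policy):

    import json
    import time
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def fetch_metadata(attempts: int = 5) -> dict:
        for attempt in range(1, attempts + 1):
            print(f"Fetching {METADATA_URL}: Attempt #{attempt}")
            try:
                with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
                    return json.load(resp)
            except OSError:
                time.sleep(2 ** attempt)  # simple exponential backoff
        raise RuntimeError("metadata service unreachable")
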
Apr 30 03:23:11.345007 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:23:11.347577 extend-filesystems[1541]: Resized partition /dev/vda9 Apr 30 03:23:11.354838 extend-filesystems[1577]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:23:11.373135 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Apr 30 03:23:11.365848 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:23:11.366196 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:23:11.374385 jq[1567]: true Apr 30 03:23:11.383623 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1231) Apr 30 03:23:11.417894 jq[1582]: true Apr 30 03:23:11.441412 tar[1578]: linux-amd64/helm Apr 30 03:23:11.430638 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:23:11.433540 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:23:11.443404 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:23:11.448373 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:23:11.448427 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:23:11.449148 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:23:11.449301 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Apr 30 03:23:11.456015 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:23:11.464559 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:23:11.467727 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:23:11.569422 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:23:11.571730 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:23:11.626197 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Apr 30 03:23:11.650405 extend-filesystems[1577]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 03:23:11.650405 extend-filesystems[1577]: old_desc_blocks = 1, new_desc_blocks = 8 Apr 30 03:23:11.650405 extend-filesystems[1577]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Apr 30 03:23:11.647068 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:23:11.692076 extend-filesystems[1541]: Resized filesystem in /dev/vda9 Apr 30 03:23:11.692076 extend-filesystems[1541]: Found vdb Apr 30 03:23:11.648629 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:23:11.710216 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:23:11.710997 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:23:11.731642 systemd[1]: Starting sshkeys.service... 
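
The resize2fs numbers above are easy to sanity-check: ext4 is using 4 KiB blocks, so the root partition grew from roughly 2.1 GiB to about 57.7 GiB, i.e. the shipped image was expanded to fill the droplet's disk:

    BLOCK = 4096  # bytes, from "(4k) blocks" in the log
    for label, blocks in (("before", 553_472), ("after", 15_121_403)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 57.68 GiB
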
Apr 30 03:23:11.737093 systemd-logind[1563]: New seat seat0. Apr 30 03:23:11.749529 systemd-logind[1563]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 03:23:11.749558 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:23:11.749909 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:23:11.779828 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:23:11.793035 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:23:11.986413 coreos-metadata[1632]: Apr 30 03:23:11.986 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:23:12.007031 coreos-metadata[1632]: Apr 30 03:23:12.005 INFO Fetch successful Apr 30 03:23:12.041497 unknown[1632]: wrote ssh authorized keys file for user: core Apr 30 03:23:12.104428 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:23:12.111870 update-ssh-keys[1649]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:23:12.115439 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:23:12.116856 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:23:12.135629 systemd[1]: Finished sshkeys.service. Apr 30 03:23:12.235353 containerd[1586]: time="2025-04-30T03:23:12.233848249Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:23:12.240982 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:23:12.255673 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:23:12.285347 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:23:12.287566 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:23:12.297876 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:23:12.315723 containerd[1586]: time="2025-04-30T03:23:12.315587617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.319650482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.319695220Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.319736824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.319930337Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.319948966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.320010956Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.320023228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.320249964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.320265616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.320278247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:23:12.320650 containerd[1586]: time="2025-04-30T03:23:12.320287939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:23:12.323421 containerd[1586]: time="2025-04-30T03:23:12.322616650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:23:12.323421 containerd[1586]: time="2025-04-30T03:23:12.322963260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:23:12.323421 containerd[1586]: time="2025-04-30T03:23:12.323214210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:23:12.323421 containerd[1586]: time="2025-04-30T03:23:12.323271899Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:23:12.325096 containerd[1586]: time="2025-04-30T03:23:12.325056876Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:23:12.325288 containerd[1586]: time="2025-04-30T03:23:12.325272427Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:23:12.338473 containerd[1586]: time="2025-04-30T03:23:12.338409610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:23:12.339821 containerd[1586]: time="2025-04-30T03:23:12.339686761Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:23:12.339969 containerd[1586]: time="2025-04-30T03:23:12.339853193Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:23:12.339969 containerd[1586]: time="2025-04-30T03:23:12.339888643Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:23:12.339969 containerd[1586]: time="2025-04-30T03:23:12.339910431Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:23:12.340244 containerd[1586]: time="2025-04-30T03:23:12.340141514Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.340671309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.340863664Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.340889145Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.340911068Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.340931030Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.340947069Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.340960052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.340977394Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.340993413Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.341006795Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.341022710Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.341036983Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.341057819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.341863 containerd[1586]: time="2025-04-30T03:23:12.341071000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341082943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341096714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341108154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341120985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341132931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341145586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341159102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341172273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341183530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341194188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341205710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341234527Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341259255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341271786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.342266 containerd[1586]: time="2025-04-30T03:23:12.341283388Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:23:12.345004 containerd[1586]: time="2025-04-30T03:23:12.344403125Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:23:12.345004 containerd[1586]: time="2025-04-30T03:23:12.344459069Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:23:12.345004 containerd[1586]: time="2025-04-30T03:23:12.344478851Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:23:12.345004 containerd[1586]: time="2025-04-30T03:23:12.344492299Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:23:12.345004 containerd[1586]: time="2025-04-30T03:23:12.344519348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:23:12.345004 containerd[1586]: time="2025-04-30T03:23:12.344536545Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:23:12.345004 containerd[1586]: time="2025-04-30T03:23:12.344550262Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:23:12.345004 containerd[1586]: time="2025-04-30T03:23:12.344573933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 03:23:12.345268 containerd[1586]: time="2025-04-30T03:23:12.344920141Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:23:12.345268 containerd[1586]: time="2025-04-30T03:23:12.345003470Z" level=info msg="Connect containerd service" Apr 30 03:23:12.345268 containerd[1586]: time="2025-04-30T03:23:12.345060861Z" level=info msg="using legacy CRI server" Apr 30 03:23:12.345268 containerd[1586]: time="2025-04-30T03:23:12.345070883Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:23:12.348581 containerd[1586]: time="2025-04-30T03:23:12.345500530Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:23:12.348581 containerd[1586]: time="2025-04-30T03:23:12.346496431Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 
03:23:12.346755 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:23:12.353142 containerd[1586]: time="2025-04-30T03:23:12.352268210Z" level=info msg="Start subscribing containerd event" Apr 30 03:23:12.354244 containerd[1586]: time="2025-04-30T03:23:12.353306965Z" level=info msg="Start recovering state" Apr 30 03:23:12.359151 containerd[1586]: time="2025-04-30T03:23:12.358844340Z" level=info msg="Start event monitor" Apr 30 03:23:12.359151 containerd[1586]: time="2025-04-30T03:23:12.358889144Z" level=info msg="Start snapshots syncer" Apr 30 03:23:12.359151 containerd[1586]: time="2025-04-30T03:23:12.358901591Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:23:12.359151 containerd[1586]: time="2025-04-30T03:23:12.358915403Z" level=info msg="Start streaming server" Apr 30 03:23:12.359151 containerd[1586]: time="2025-04-30T03:23:12.359070709Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:23:12.359151 containerd[1586]: time="2025-04-30T03:23:12.359137711Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:23:12.359516 containerd[1586]: time="2025-04-30T03:23:12.359193334Z" level=info msg="containerd successfully booted in 0.130282s" Apr 30 03:23:12.360588 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:23:12.370016 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:23:12.372405 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:23:12.377686 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:23:12.696914 tar[1578]: linux-amd64/LICENSE Apr 30 03:23:12.697575 tar[1578]: linux-amd64/README.md Apr 30 03:23:12.716639 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:23:13.167590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:23:13.170123 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:23:13.175510 systemd[1]: Startup finished in 6.989s (kernel) + 6.978s (userspace) = 13.968s. Apr 30 03:23:13.179892 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:23:13.731971 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:23:13.738446 systemd[1]: Started sshd@0-64.227.96.87:22-139.178.89.65:51654.service - OpenSSH per-connection server daemon (139.178.89.65:51654). Apr 30 03:23:13.821491 sshd[1704]: Accepted publickey for core from 139.178.89.65 port 51654 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:23:13.823894 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:13.836644 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:23:13.845465 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:23:13.850248 systemd-logind[1563]: New session 1 of user core. Apr 30 03:23:13.875626 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:23:13.890902 systemd[1]: Starting user@500.service - User Manager for UID 500... 
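
One small oddity in the "Startup finished" line: the printed parts sum to 13.967 s, not the logged 13.968 s. The likely explanation is that systemd tracks these phases at microsecond precision and rounds each figure separately for display, so the printed total can disagree with the printed parts by a millisecond:

    kernel, userspace = 6.989, 6.978  # phase timings as printed
    print(f"{kernel + userspace:.3f}s")  # 13.967s, vs the logged 13.968s total
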
Apr 30 03:23:13.896190 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:23:14.008465 kubelet[1693]: E0430 03:23:14.007639 1693 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:23:14.012203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:23:14.012644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:23:14.026405 systemd[1710]: Queued start job for default target default.target. Apr 30 03:23:14.027280 systemd[1710]: Created slice app.slice - User Application Slice. Apr 30 03:23:14.027317 systemd[1710]: Reached target paths.target - Paths. Apr 30 03:23:14.027356 systemd[1710]: Reached target timers.target - Timers. Apr 30 03:23:14.032479 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:23:14.054653 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:23:14.054732 systemd[1710]: Reached target sockets.target - Sockets. Apr 30 03:23:14.054746 systemd[1710]: Reached target basic.target - Basic System. Apr 30 03:23:14.054795 systemd[1710]: Reached target default.target - Main User Target. Apr 30 03:23:14.054828 systemd[1710]: Startup finished in 147ms. Apr 30 03:23:14.056103 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:23:14.077937 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:23:14.148170 systemd[1]: Started sshd@1-64.227.96.87:22-139.178.89.65:51660.service - OpenSSH per-connection server daemon (139.178.89.65:51660). Apr 30 03:23:14.192621 sshd[1724]: Accepted publickey for core from 139.178.89.65 port 51660 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:23:14.195022 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:14.251984 systemd-logind[1563]: New session 2 of user core. Apr 30 03:23:14.263815 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:23:14.333561 sshd[1724]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:14.342803 systemd[1]: Started sshd@2-64.227.96.87:22-139.178.89.65:51668.service - OpenSSH per-connection server daemon (139.178.89.65:51668). Apr 30 03:23:14.343642 systemd[1]: sshd@1-64.227.96.87:22-139.178.89.65:51660.service: Deactivated successfully. Apr 30 03:23:14.347803 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:23:14.349007 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:23:14.352194 systemd-logind[1563]: Removed session 2. Apr 30 03:23:14.388129 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 51668 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:23:14.390090 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:14.397431 systemd-logind[1563]: New session 3 of user core. Apr 30 03:23:14.404782 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:23:14.465663 sshd[1729]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:14.470172 systemd[1]: sshd@2-64.227.96.87:22-139.178.89.65:51668.service: Deactivated successfully. 
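
The kubelet exit above (repeated on every scheduled restart later in this log) is expected at this stage: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, neither of which has run yet, so systemd keeps restarting the unit until the file appears. The failing condition reduces to a one-line check; a hypothetical pre-flight helper mirroring the logged error:

    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if not CONFIG.is_file():
        raise SystemExit(f"open {CONFIG}: no such file or directory")
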
Apr 30 03:23:14.475708 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:23:14.476644 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:23:14.483740 systemd[1]: Started sshd@3-64.227.96.87:22-139.178.89.65:51684.service - OpenSSH per-connection server daemon (139.178.89.65:51684). Apr 30 03:23:14.485842 systemd-logind[1563]: Removed session 3. Apr 30 03:23:14.537673 sshd[1740]: Accepted publickey for core from 139.178.89.65 port 51684 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:23:14.540092 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:14.546299 systemd-logind[1563]: New session 4 of user core. Apr 30 03:23:14.557796 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:23:14.623941 sshd[1740]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:14.637177 systemd[1]: Started sshd@4-64.227.96.87:22-139.178.89.65:51686.service - OpenSSH per-connection server daemon (139.178.89.65:51686). Apr 30 03:23:14.638053 systemd[1]: sshd@3-64.227.96.87:22-139.178.89.65:51684.service: Deactivated successfully. Apr 30 03:23:14.640531 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:23:14.642343 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:23:14.644821 systemd-logind[1563]: Removed session 4. Apr 30 03:23:14.676359 sshd[1745]: Accepted publickey for core from 139.178.89.65 port 51686 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:23:14.678690 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:14.684026 systemd-logind[1563]: New session 5 of user core. Apr 30 03:23:14.691788 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:23:14.762053 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:23:14.762890 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:23:14.779956 sudo[1752]: pam_unix(sudo:session): session closed for user root Apr 30 03:23:14.784723 sshd[1745]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:14.797521 systemd[1]: Started sshd@5-64.227.96.87:22-139.178.89.65:51692.service - OpenSSH per-connection server daemon (139.178.89.65:51692). Apr 30 03:23:14.798225 systemd[1]: sshd@4-64.227.96.87:22-139.178.89.65:51686.service: Deactivated successfully. Apr 30 03:23:14.805036 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:23:14.805594 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:23:14.809759 systemd-logind[1563]: Removed session 5. Apr 30 03:23:14.837729 sshd[1754]: Accepted publickey for core from 139.178.89.65 port 51692 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:23:14.840108 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:14.847041 systemd-logind[1563]: New session 6 of user core. Apr 30 03:23:14.852924 systemd[1]: Started session-6.scope - Session 6 of User core. 
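
The run of sshd entries above is one client opening and closing short-lived sessions in quick succession. The "Accepted publickey" lines carry user, source address, port, and key fingerprint in a fixed shape; a sketch of extracting them for auditing (the regex is an assumption about this journal's formatting):

    import re

    PAT = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+) "
                     r"ssh2: (\S+) (\S+)")

    line = ("sshd[1745]: Accepted publickey for core from 139.178.89.65 "
            "port 51686 ssh2: RSA "
            "SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY")
    user, addr, port, keytype, fingerprint = PAT.search(line).groups()
    print(user, addr, port, keytype, fingerprint)
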
Apr 30 03:23:14.915584 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:23:14.916018 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:23:14.920721 sudo[1762]: pam_unix(sudo:session): session closed for user root Apr 30 03:23:14.928008 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:23:14.928516 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:23:14.945996 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:23:14.958544 auditctl[1765]: No rules Apr 30 03:23:14.959049 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:23:14.959477 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:23:14.969170 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:23:15.002368 augenrules[1784]: No rules Apr 30 03:23:15.004213 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:23:15.006902 sudo[1761]: pam_unix(sudo:session): session closed for user root Apr 30 03:23:15.013667 sshd[1754]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:15.022817 systemd[1]: Started sshd@6-64.227.96.87:22-139.178.89.65:51696.service - OpenSSH per-connection server daemon (139.178.89.65:51696). Apr 30 03:23:15.023637 systemd[1]: sshd@5-64.227.96.87:22-139.178.89.65:51692.service: Deactivated successfully. Apr 30 03:23:15.026467 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:23:15.028572 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:23:15.030497 systemd-logind[1563]: Removed session 6. Apr 30 03:23:15.067742 sshd[1791]: Accepted publickey for core from 139.178.89.65 port 51696 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:23:15.070037 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:15.075732 systemd-logind[1563]: New session 7 of user core. Apr 30 03:23:15.085776 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:23:15.147693 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:23:15.148015 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:23:15.636782 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:23:15.644995 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:23:16.078593 dockerd[1813]: time="2025-04-30T03:23:16.077982303Z" level=info msg="Starting up" Apr 30 03:23:16.319895 dockerd[1813]: time="2025-04-30T03:23:16.319596044Z" level=info msg="Loading containers: start." Apr 30 03:23:16.441697 kernel: Initializing XFRM netlink socket Apr 30 03:23:16.548084 systemd-networkd[1222]: docker0: Link UP Apr 30 03:23:16.568545 dockerd[1813]: time="2025-04-30T03:23:16.568413062Z" level=info msg="Loading containers: done." 
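
The sudo commands above rewrite the audit ruleset: remove the rule fragments from /etc/audit/rules.d, then restart audit-rules, which flushes the kernel's loaded rules and reloads from the (now empty) directory, hence "No rules" from both auditctl and augenrules. The restart amounts to roughly this sequence (auditctl and augenrules are the real auditd userspace tools; running this requires root):

    import subprocess

    subprocess.run(["auditctl", "-D"], check=True)        # flush loaded rules
    subprocess.run(["augenrules", "--load"], check=True)  # reload from rules.d
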
Apr 30 03:23:16.587086 dockerd[1813]: time="2025-04-30T03:23:16.586984191Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:23:16.587584 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2264711006-merged.mount: Deactivated successfully. Apr 30 03:23:16.588308 dockerd[1813]: time="2025-04-30T03:23:16.588267416Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:23:16.588458 dockerd[1813]: time="2025-04-30T03:23:16.588439659Z" level=info msg="Daemon has completed initialization" Apr 30 03:23:16.628309 dockerd[1813]: time="2025-04-30T03:23:16.628188391Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:23:16.628791 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:23:17.944880 containerd[1586]: time="2025-04-30T03:23:17.944831028Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:23:18.457174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39354799.mount: Deactivated successfully. Apr 30 03:23:19.768306 containerd[1586]: time="2025-04-30T03:23:19.768243251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:19.769644 containerd[1586]: time="2025-04-30T03:23:19.769197619Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" Apr 30 03:23:19.770368 containerd[1586]: time="2025-04-30T03:23:19.770320112Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:19.773974 containerd[1586]: time="2025-04-30T03:23:19.773896748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:19.775742 containerd[1586]: time="2025-04-30T03:23:19.775519959Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.830640942s" Apr 30 03:23:19.775742 containerd[1586]: time="2025-04-30T03:23:19.775571895Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:23:19.802158 containerd[1586]: time="2025-04-30T03:23:19.801866720Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:23:21.402846 containerd[1586]: time="2025-04-30T03:23:21.402783344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:21.405027 containerd[1586]: time="2025-04-30T03:23:21.404945038Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" Apr 30 03:23:21.406385 containerd[1586]: 
time="2025-04-30T03:23:21.406181202Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:21.409504 containerd[1586]: time="2025-04-30T03:23:21.409047279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:21.410179 containerd[1586]: time="2025-04-30T03:23:21.410143946Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.608232243s" Apr 30 03:23:21.410179 containerd[1586]: time="2025-04-30T03:23:21.410177820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 03:23:21.439598 containerd[1586]: time="2025-04-30T03:23:21.439538169Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 03:23:22.583975 containerd[1586]: time="2025-04-30T03:23:22.583881559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:22.585963 containerd[1586]: time="2025-04-30T03:23:22.585879615Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" Apr 30 03:23:22.586361 containerd[1586]: time="2025-04-30T03:23:22.586302997Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:22.597183 containerd[1586]: time="2025-04-30T03:23:22.596996446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:22.598627 containerd[1586]: time="2025-04-30T03:23:22.598560595Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.158727054s" Apr 30 03:23:22.598627 containerd[1586]: time="2025-04-30T03:23:22.598624962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 03:23:22.645010 containerd[1586]: time="2025-04-30T03:23:22.644705314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 03:23:23.704589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050584963.mount: Deactivated successfully. Apr 30 03:23:24.262965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:23:24.273969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 03:23:24.329785 containerd[1586]: time="2025-04-30T03:23:24.329714610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:24.332378 containerd[1586]: time="2025-04-30T03:23:24.331362648Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" Apr 30 03:23:24.332378 containerd[1586]: time="2025-04-30T03:23:24.332073441Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:24.337368 containerd[1586]: time="2025-04-30T03:23:24.335466613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:24.337368 containerd[1586]: time="2025-04-30T03:23:24.336432439Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.691677504s" Apr 30 03:23:24.337368 containerd[1586]: time="2025-04-30T03:23:24.336470513Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:23:24.379357 containerd[1586]: time="2025-04-30T03:23:24.378592978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:23:24.380751 systemd-resolved[1478]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Apr 30 03:23:24.454671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:23:24.474136 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:23:24.541437 kubelet[2067]: E0430 03:23:24.541235 2067 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:23:24.547016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:23:24.547319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:23:24.841945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292312046.mount: Deactivated successfully. 
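
The per-image timings double as a rough bandwidth estimate; for the kube-proxy pull just logged (a coarse figure, since registry latency and unpacking are folded into the duration):

    size_bytes = 29_184_836   # image size reported in the log
    seconds = 1.691677504     # pull duration reported in the log
    print(f"{size_bytes / seconds / 1e6:.1f} MB/s")  # about 17.3 MB/s
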
Apr 30 03:23:25.747059 containerd[1586]: time="2025-04-30T03:23:25.746978163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:25.748473 containerd[1586]: time="2025-04-30T03:23:25.748388891Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 03:23:25.749499 containerd[1586]: time="2025-04-30T03:23:25.749386978Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:25.753193 containerd[1586]: time="2025-04-30T03:23:25.753098924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:25.755385 containerd[1586]: time="2025-04-30T03:23:25.754924002Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.376105743s" Apr 30 03:23:25.755385 containerd[1586]: time="2025-04-30T03:23:25.754990505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:23:25.785021 containerd[1586]: time="2025-04-30T03:23:25.784963927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:23:26.221834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88316896.mount: Deactivated successfully. 
Apr 30 03:23:26.229988 containerd[1586]: time="2025-04-30T03:23:26.228363423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:26.229988 containerd[1586]: time="2025-04-30T03:23:26.229436887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Apr 30 03:23:26.229988 containerd[1586]: time="2025-04-30T03:23:26.229903905Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:26.233414 containerd[1586]: time="2025-04-30T03:23:26.233346088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:26.235088 containerd[1586]: time="2025-04-30T03:23:26.235028572Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 449.757479ms" Apr 30 03:23:26.235419 containerd[1586]: time="2025-04-30T03:23:26.235316899Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:23:26.267821 containerd[1586]: time="2025-04-30T03:23:26.267767189Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:23:26.723213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount566132893.mount: Deactivated successfully. Apr 30 03:23:27.431519 systemd-resolved[1478]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
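
The "degraded feature set UDP instead of UDP+EDNS0" messages mean resolved probed the DigitalOcean resolvers, got no usable EDNS0 responses, and fell back to classic DNS, which caps UDP replies at 512 bytes. On the wire the difference is just the absence of an OPT pseudo-record in the query; a hand-built plain query for comparison (standard DNS wire format; the resolver address is the one from the log, the queried name is an example):

    import socket
    import struct

    def plain_a_query(name: str, server: str = "67.207.67.3") -> bytes:
        # 12-byte header: id, flags (RD set), 1 question, no other sections.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode()
                         for p in name.split("."))
        question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # A, IN
        # No OPT record appended, so no EDNS0: replies must fit in 512 bytes.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2)
            s.sendto(header + question, (server, 53))
            return s.recv(512)
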
Apr 30 03:23:28.524156 containerd[1586]: time="2025-04-30T03:23:28.524063479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:28.525980 containerd[1586]: time="2025-04-30T03:23:28.525902694Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Apr 30 03:23:28.527115 containerd[1586]: time="2025-04-30T03:23:28.527031201Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:28.530370 containerd[1586]: time="2025-04-30T03:23:28.529990317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:28.531622 containerd[1586]: time="2025-04-30T03:23:28.531441871Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.263632634s" Apr 30 03:23:28.531622 containerd[1586]: time="2025-04-30T03:23:28.531488360Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:23:31.588944 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:23:31.599706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:23:31.627143 systemd[1]: Reloading requested from client PID 2241 ('systemctl') (unit session-7.scope)... Apr 30 03:23:31.627160 systemd[1]: Reloading... Apr 30 03:23:31.729409 zram_generator::config[2277]: No configuration found. Apr 30 03:23:31.918431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:23:32.006342 systemd[1]: Reloading finished in 378 ms. Apr 30 03:23:32.058720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:23:32.063826 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:23:32.068067 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:23:32.068391 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:23:32.072940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:23:32.212596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:23:32.220369 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:23:32.284895 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:23:32.284895 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Apr 30 03:23:32.284895 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:23:32.287263 kubelet[2349]: I0430 03:23:32.287179 2349 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:23:32.602473 kubelet[2349]: I0430 03:23:32.601903 2349 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:23:32.602473 kubelet[2349]: I0430 03:23:32.601941 2349 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:23:32.602473 kubelet[2349]: I0430 03:23:32.602266 2349 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:23:32.629515 kubelet[2349]: I0430 03:23:32.629295 2349 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:23:32.632957 kubelet[2349]: E0430 03:23:32.632693 2349 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.227.96.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:32.648919 kubelet[2349]: I0430 03:23:32.648884 2349 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:23:32.650818 kubelet[2349]: I0430 03:23:32.650721 2349 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:23:32.651096 kubelet[2349]: I0430 03:23:32.650809 2349 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-0-0c5ff7085f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:23:32.651665 kubelet[2349]: I0430 03:23:32.651624 2349 topology_manager.go:138] "Creating 
topology manager with none policy" Apr 30 03:23:32.651665 kubelet[2349]: I0430 03:23:32.651655 2349 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:23:32.651821 kubelet[2349]: I0430 03:23:32.651804 2349 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:23:32.653090 kubelet[2349]: I0430 03:23:32.652845 2349 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:23:32.653090 kubelet[2349]: I0430 03:23:32.652871 2349 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:23:32.653090 kubelet[2349]: I0430 03:23:32.652910 2349 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:23:32.653090 kubelet[2349]: I0430 03:23:32.652930 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:23:32.655255 kubelet[2349]: W0430 03:23:32.655081 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.96.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-0-0c5ff7085f&limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:32.655255 kubelet[2349]: E0430 03:23:32.655144 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.227.96.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-0-0c5ff7085f&limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:32.656718 kubelet[2349]: W0430 03:23:32.656457 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.96.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:32.656718 kubelet[2349]: E0430 03:23:32.656535 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.227.96.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:32.656897 kubelet[2349]: I0430 03:23:32.656792 2349 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:23:32.658926 kubelet[2349]: I0430 03:23:32.658871 2349 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:23:32.659045 kubelet[2349]: W0430 03:23:32.658962 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 03:23:32.660052 kubelet[2349]: I0430 03:23:32.659623 2349 server.go:1264] "Started kubelet" Apr 30 03:23:32.667906 kubelet[2349]: E0430 03:23:32.667519 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.96.87:6443/api/v1/namespaces/default/events\": dial tcp 64.227.96.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-0-0c5ff7085f.183afaa9c74c99b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-0-0c5ff7085f,UID:ci-4081.3.3-0-0c5ff7085f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-0-0c5ff7085f,},FirstTimestamp:2025-04-30 03:23:32.659591603 +0000 UTC m=+0.432813761,LastTimestamp:2025-04-30 03:23:32.659591603 +0000 UTC m=+0.432813761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-0-0c5ff7085f,}" Apr 30 03:23:32.667906 kubelet[2349]: I0430 03:23:32.667695 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:23:32.668582 kubelet[2349]: I0430 03:23:32.668552 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:23:32.669768 kubelet[2349]: I0430 03:23:32.669739 2349 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:23:32.669863 kubelet[2349]: I0430 03:23:32.669806 2349 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:23:32.672357 kubelet[2349]: I0430 03:23:32.670945 2349 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:23:32.676704 kubelet[2349]: I0430 03:23:32.676674 2349 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:23:32.677019 kubelet[2349]: I0430 03:23:32.676823 2349 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:23:32.677019 kubelet[2349]: I0430 03:23:32.676908 2349 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:23:32.677909 kubelet[2349]: W0430 03:23:32.677762 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.96.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:32.677909 kubelet[2349]: E0430 03:23:32.677915 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.227.96.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:32.678858 kubelet[2349]: E0430 03:23:32.678728 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-0-0c5ff7085f?timeout=10s\": dial tcp 64.227.96.87:6443: connect: connection refused" interval="200ms" Apr 30 03:23:32.680278 kubelet[2349]: I0430 03:23:32.680250 2349 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:23:32.680411 kubelet[2349]: I0430 03:23:32.680392 2349 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 
03:23:32.681459 kubelet[2349]: E0430 03:23:32.681435 2349 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:23:32.685688 kubelet[2349]: I0430 03:23:32.685656 2349 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:23:32.707706 kubelet[2349]: I0430 03:23:32.707632 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:23:32.710708 kubelet[2349]: I0430 03:23:32.710662 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:23:32.710708 kubelet[2349]: I0430 03:23:32.710706 2349 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:23:32.710885 kubelet[2349]: I0430 03:23:32.710727 2349 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:23:32.710885 kubelet[2349]: E0430 03:23:32.710786 2349 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:23:32.723297 kubelet[2349]: W0430 03:23:32.722950 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.96.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:32.723297 kubelet[2349]: E0430 03:23:32.723017 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.227.96.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:32.730843 kubelet[2349]: I0430 03:23:32.730796 2349 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:23:32.730843 kubelet[2349]: I0430 03:23:32.730820 2349 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:23:32.730843 kubelet[2349]: I0430 03:23:32.730846 2349 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:23:32.732940 kubelet[2349]: I0430 03:23:32.732885 2349 policy_none.go:49] "None policy: Start" Apr 30 03:23:32.734459 kubelet[2349]: I0430 03:23:32.733874 2349 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:23:32.734459 kubelet[2349]: I0430 03:23:32.733954 2349 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:23:32.741205 kubelet[2349]: I0430 03:23:32.741152 2349 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:23:32.741480 kubelet[2349]: I0430 03:23:32.741430 2349 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:23:32.741592 kubelet[2349]: I0430 03:23:32.741572 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:23:32.747887 kubelet[2349]: E0430 03:23:32.747838 2349 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-0-0c5ff7085f\" not found" Apr 30 03:23:32.778968 kubelet[2349]: I0430 03:23:32.778920 2349 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.779336 kubelet[2349]: E0430 03:23:32.779296 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.96.87:6443/api/v1/nodes\": dial tcp 64.227.96.87:6443: 
connect: connection refused" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.812737 kubelet[2349]: I0430 03:23:32.811789 2349 topology_manager.go:215] "Topology Admit Handler" podUID="acbaec2bd9013c64f16cf43e34305637" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.813925 kubelet[2349]: I0430 03:23:32.813202 2349 topology_manager.go:215] "Topology Admit Handler" podUID="2f7c3d5c12a98b4b98eadfac3b2ce5ed" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.815477 kubelet[2349]: I0430 03:23:32.815432 2349 topology_manager.go:215] "Topology Admit Handler" podUID="0fef68fb92d4efae7933621d4561acc0" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.878024 kubelet[2349]: I0430 03:23:32.877851 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.878618 kubelet[2349]: I0430 03:23:32.878250 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acbaec2bd9013c64f16cf43e34305637-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-0-0c5ff7085f\" (UID: \"acbaec2bd9013c64f16cf43e34305637\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.878618 kubelet[2349]: I0430 03:23:32.878300 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.878618 kubelet[2349]: I0430 03:23:32.878343 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.878618 kubelet[2349]: I0430 03:23:32.878368 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.878618 kubelet[2349]: I0430 03:23:32.878430 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.878874 kubelet[2349]: I0430 03:23:32.878456 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/0fef68fb92d4efae7933621d4561acc0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-0-0c5ff7085f\" (UID: \"0fef68fb92d4efae7933621d4561acc0\") " pod="kube-system/kube-scheduler-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.878874 kubelet[2349]: I0430 03:23:32.878472 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acbaec2bd9013c64f16cf43e34305637-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-0-0c5ff7085f\" (UID: \"acbaec2bd9013c64f16cf43e34305637\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.878874 kubelet[2349]: I0430 03:23:32.878558 2349 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acbaec2bd9013c64f16cf43e34305637-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-0-0c5ff7085f\" (UID: \"acbaec2bd9013c64f16cf43e34305637\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.879906 kubelet[2349]: E0430 03:23:32.879848 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-0-0c5ff7085f?timeout=10s\": dial tcp 64.227.96.87:6443: connect: connection refused" interval="400ms" Apr 30 03:23:32.980559 kubelet[2349]: I0430 03:23:32.980504 2349 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:32.981091 kubelet[2349]: E0430 03:23:32.981022 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.96.87:6443/api/v1/nodes\": dial tcp 64.227.96.87:6443: connect: connection refused" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:33.119185 kubelet[2349]: E0430 03:23:33.119075 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:33.120056 containerd[1586]: time="2025-04-30T03:23:33.119906207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-0-0c5ff7085f,Uid:acbaec2bd9013c64f16cf43e34305637,Namespace:kube-system,Attempt:0,}" Apr 30 03:23:33.121926 systemd-resolved[1478]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Apr 30 03:23:33.123345 kubelet[2349]: E0430 03:23:33.122841 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:33.124370 kubelet[2349]: E0430 03:23:33.123999 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:33.127099 containerd[1586]: time="2025-04-30T03:23:33.127038908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-0-0c5ff7085f,Uid:2f7c3d5c12a98b4b98eadfac3b2ce5ed,Namespace:kube-system,Attempt:0,}" Apr 30 03:23:33.127372 containerd[1586]: time="2025-04-30T03:23:33.127042765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-0-0c5ff7085f,Uid:0fef68fb92d4efae7933621d4561acc0,Namespace:kube-system,Attempt:0,}" Apr 30 03:23:33.281116 kubelet[2349]: E0430 03:23:33.280975 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-0-0c5ff7085f?timeout=10s\": dial tcp 64.227.96.87:6443: connect: connection refused" interval="800ms" Apr 30 03:23:33.382893 kubelet[2349]: I0430 03:23:33.382686 2349 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:33.383395 kubelet[2349]: E0430 03:23:33.383164 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.96.87:6443/api/v1/nodes\": dial tcp 64.227.96.87:6443: connect: connection refused" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:33.524385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592873403.mount: Deactivated successfully. 
Apr 30 03:23:33.530405 containerd[1586]: time="2025-04-30T03:23:33.529389425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:23:33.531235 containerd[1586]: time="2025-04-30T03:23:33.531102771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 03:23:33.533509 containerd[1586]: time="2025-04-30T03:23:33.533451542Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:23:33.535204 containerd[1586]: time="2025-04-30T03:23:33.534338875Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:23:33.535204 containerd[1586]: time="2025-04-30T03:23:33.534802834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:23:33.535204 containerd[1586]: time="2025-04-30T03:23:33.534943503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:23:33.535204 containerd[1586]: time="2025-04-30T03:23:33.535158350Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:23:33.539610 containerd[1586]: time="2025-04-30T03:23:33.539549674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:23:33.540745 containerd[1586]: time="2025-04-30T03:23:33.540691590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 413.450382ms" Apr 30 03:23:33.542632 containerd[1586]: time="2025-04-30T03:23:33.542596787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 415.145917ms" Apr 30 03:23:33.545919 containerd[1586]: time="2025-04-30T03:23:33.545878248Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 425.886543ms" Apr 30 03:23:33.705107 containerd[1586]: time="2025-04-30T03:23:33.704682736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:23:33.705107 containerd[1586]: time="2025-04-30T03:23:33.705038941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:23:33.706309 containerd[1586]: time="2025-04-30T03:23:33.705830441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:23:33.706309 containerd[1586]: time="2025-04-30T03:23:33.705913104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:23:33.706309 containerd[1586]: time="2025-04-30T03:23:33.705927954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:33.706905 containerd[1586]: time="2025-04-30T03:23:33.705162639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:33.707483 containerd[1586]: time="2025-04-30T03:23:33.707396968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:33.707864 containerd[1586]: time="2025-04-30T03:23:33.707741084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:33.724164 containerd[1586]: time="2025-04-30T03:23:33.723087298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:23:33.724164 containerd[1586]: time="2025-04-30T03:23:33.723156862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:23:33.724164 containerd[1586]: time="2025-04-30T03:23:33.723173405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:33.724164 containerd[1586]: time="2025-04-30T03:23:33.723289066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:33.799381 kubelet[2349]: W0430 03:23:33.798797 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.96.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-0-0c5ff7085f&limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:33.799381 kubelet[2349]: E0430 03:23:33.799217 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.227.96.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-0-0c5ff7085f&limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:33.805352 containerd[1586]: time="2025-04-30T03:23:33.802974091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-0-0c5ff7085f,Uid:acbaec2bd9013c64f16cf43e34305637,Namespace:kube-system,Attempt:0,} returns sandbox id \"b655e2fa8cf8941ebe4451ebca5548dace712012a71ffb275bcc751665f71f78\"" Apr 30 03:23:33.808367 kubelet[2349]: E0430 03:23:33.807940 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:33.821004 containerd[1586]: time="2025-04-30T03:23:33.820765966Z" level=info msg="CreateContainer within sandbox \"b655e2fa8cf8941ebe4451ebca5548dace712012a71ffb275bcc751665f71f78\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:23:33.843794 containerd[1586]: time="2025-04-30T03:23:33.843477627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-0-0c5ff7085f,Uid:2f7c3d5c12a98b4b98eadfac3b2ce5ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"152c622dd10b7637425e6f6ef49f534f115d3d0787122f0d96e76bc3e0221e39\"" Apr 30 03:23:33.845968 kubelet[2349]: E0430 03:23:33.845587 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:33.849894 containerd[1586]: time="2025-04-30T03:23:33.849839412Z" level=info msg="CreateContainer within sandbox \"152c622dd10b7637425e6f6ef49f534f115d3d0787122f0d96e76bc3e0221e39\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:23:33.856196 containerd[1586]: time="2025-04-30T03:23:33.856082097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-0-0c5ff7085f,Uid:0fef68fb92d4efae7933621d4561acc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"14771fa1c905383842b4980109cc6f6a898ed57783bd79212982185c46635ec7\"" Apr 30 03:23:33.857966 kubelet[2349]: E0430 03:23:33.857933 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:33.860556 containerd[1586]: time="2025-04-30T03:23:33.860515554Z" level=info msg="CreateContainer within sandbox \"b655e2fa8cf8941ebe4451ebca5548dace712012a71ffb275bcc751665f71f78\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e5cdc8b85f6c2f8bfab1de0e85b663793fde22db80d157faf1310cd1e0a815e2\"" Apr 30 03:23:33.862014 containerd[1586]: time="2025-04-30T03:23:33.861978289Z" level=info msg="StartContainer for 
\"e5cdc8b85f6c2f8bfab1de0e85b663793fde22db80d157faf1310cd1e0a815e2\"" Apr 30 03:23:33.863088 containerd[1586]: time="2025-04-30T03:23:33.863031662Z" level=info msg="CreateContainer within sandbox \"14771fa1c905383842b4980109cc6f6a898ed57783bd79212982185c46635ec7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:23:33.872143 containerd[1586]: time="2025-04-30T03:23:33.871941409Z" level=info msg="CreateContainer within sandbox \"152c622dd10b7637425e6f6ef49f534f115d3d0787122f0d96e76bc3e0221e39\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"54df01c12bf6b9407cc04666fb96e352c7f7fb897affabce8eb97bf86e8b8ab6\"" Apr 30 03:23:33.873320 containerd[1586]: time="2025-04-30T03:23:33.873200763Z" level=info msg="StartContainer for \"54df01c12bf6b9407cc04666fb96e352c7f7fb897affabce8eb97bf86e8b8ab6\"" Apr 30 03:23:33.875981 containerd[1586]: time="2025-04-30T03:23:33.875911878Z" level=info msg="CreateContainer within sandbox \"14771fa1c905383842b4980109cc6f6a898ed57783bd79212982185c46635ec7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d07a1ccfdb6440dd6978bc1dc02ef1a4635bfac18c918634170abea38b157aff\"" Apr 30 03:23:33.876689 containerd[1586]: time="2025-04-30T03:23:33.876576422Z" level=info msg="StartContainer for \"d07a1ccfdb6440dd6978bc1dc02ef1a4635bfac18c918634170abea38b157aff\"" Apr 30 03:23:33.978855 kubelet[2349]: W0430 03:23:33.978282 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.96.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:33.978855 kubelet[2349]: E0430 03:23:33.978365 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.227.96.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:34.019028 containerd[1586]: time="2025-04-30T03:23:34.018967136Z" level=info msg="StartContainer for \"e5cdc8b85f6c2f8bfab1de0e85b663793fde22db80d157faf1310cd1e0a815e2\" returns successfully" Apr 30 03:23:34.052870 containerd[1586]: time="2025-04-30T03:23:34.052201661Z" level=info msg="StartContainer for \"54df01c12bf6b9407cc04666fb96e352c7f7fb897affabce8eb97bf86e8b8ab6\" returns successfully" Apr 30 03:23:34.074050 containerd[1586]: time="2025-04-30T03:23:34.073822456Z" level=info msg="StartContainer for \"d07a1ccfdb6440dd6978bc1dc02ef1a4635bfac18c918634170abea38b157aff\" returns successfully" Apr 30 03:23:34.084819 kubelet[2349]: E0430 03:23:34.084024 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-0-0c5ff7085f?timeout=10s\": dial tcp 64.227.96.87:6443: connect: connection refused" interval="1.6s" Apr 30 03:23:34.102468 kubelet[2349]: W0430 03:23:34.101256 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.96.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:34.102468 kubelet[2349]: E0430 03:23:34.101377 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.227.96.87:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:34.155524 kubelet[2349]: W0430 03:23:34.154868 2349 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.96.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:34.155871 kubelet[2349]: E0430 03:23:34.155848 2349 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.227.96.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.96.87:6443: connect: connection refused Apr 30 03:23:34.187273 kubelet[2349]: I0430 03:23:34.187167 2349 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:34.190163 kubelet[2349]: E0430 03:23:34.190097 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.96.87:6443/api/v1/nodes\": dial tcp 64.227.96.87:6443: connect: connection refused" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:34.742143 kubelet[2349]: E0430 03:23:34.742055 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:34.757445 kubelet[2349]: E0430 03:23:34.754313 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:34.762384 kubelet[2349]: E0430 03:23:34.760951 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:35.766247 kubelet[2349]: E0430 03:23:35.763462 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:35.766247 kubelet[2349]: E0430 03:23:35.764284 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:35.794714 kubelet[2349]: I0430 03:23:35.794671 2349 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:36.003598 kubelet[2349]: E0430 03:23:36.003522 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-0-0c5ff7085f\" not found" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:36.099210 kubelet[2349]: I0430 03:23:36.099045 2349 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:36.111587 kubelet[2349]: E0430 03:23:36.111530 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-0-0c5ff7085f\" not found" Apr 30 03:23:36.213357 kubelet[2349]: E0430 03:23:36.211699 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-0-0c5ff7085f\" not found" Apr 30 03:23:36.312368 kubelet[2349]: E0430 03:23:36.312288 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-0-0c5ff7085f\" not found" Apr 30 03:23:36.413549 kubelet[2349]: E0430 03:23:36.413038 2349 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-0-0c5ff7085f\" not found" Apr 30 03:23:36.514019 kubelet[2349]: E0430 03:23:36.513956 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-0-0c5ff7085f\" not found" Apr 30 03:23:36.614819 kubelet[2349]: E0430 03:23:36.614752 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-0-0c5ff7085f\" not found" Apr 30 03:23:36.655278 kubelet[2349]: I0430 03:23:36.655206 2349 apiserver.go:52] "Watching apiserver" Apr 30 03:23:36.678216 kubelet[2349]: I0430 03:23:36.677719 2349 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:23:38.178262 systemd[1]: Reloading requested from client PID 2622 ('systemctl') (unit session-7.scope)... Apr 30 03:23:38.178283 systemd[1]: Reloading... Apr 30 03:23:38.291364 zram_generator::config[2664]: No configuration found. Apr 30 03:23:38.446893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:23:38.557304 systemd[1]: Reloading finished in 378 ms. Apr 30 03:23:38.600980 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:23:38.615414 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:23:38.615901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:23:38.631914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:23:38.823742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:23:38.828633 (kubelet)[2721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:23:38.949484 kubelet[2721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:23:38.949484 kubelet[2721]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:23:38.949484 kubelet[2721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:23:38.951423 kubelet[2721]: I0430 03:23:38.951305 2721 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:23:38.959308 kubelet[2721]: I0430 03:23:38.959257 2721 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:23:38.959308 kubelet[2721]: I0430 03:23:38.959298 2721 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:23:38.959684 kubelet[2721]: I0430 03:23:38.959655 2721 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:23:38.961426 kubelet[2721]: I0430 03:23:38.961381 2721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 03:23:38.964600 kubelet[2721]: I0430 03:23:38.964050 2721 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:23:38.975902 kubelet[2721]: I0430 03:23:38.975517 2721 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:23:38.976794 kubelet[2721]: I0430 03:23:38.976648 2721 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:23:38.977134 kubelet[2721]: I0430 03:23:38.976777 2721 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-0-0c5ff7085f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:23:38.977281 kubelet[2721]: I0430 03:23:38.977154 2721 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:23:38.977281 kubelet[2721]: I0430 03:23:38.977174 2721 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:23:38.977281 kubelet[2721]: I0430 03:23:38.977254 2721 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:23:38.977632 kubelet[2721]: I0430 03:23:38.977606 2721 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:23:38.977632 kubelet[2721]: I0430 03:23:38.977633 2721 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:23:38.977725 kubelet[2721]: I0430 03:23:38.977669 2721 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:23:38.977725 kubelet[2721]: I0430 03:23:38.977696 2721 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:23:38.980038 kubelet[2721]: I0430 03:23:38.979951 2721 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:23:38.982036 kubelet[2721]: I0430 03:23:38.982001 2721 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:23:38.983357 kubelet[2721]: I0430 03:23:38.982793 2721 server.go:1264] "Started kubelet" Apr 30 03:23:38.992568 
kubelet[2721]: I0430 03:23:38.992417 2721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:23:38.998473 kubelet[2721]: I0430 03:23:38.996601 2721 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:23:38.998473 kubelet[2721]: I0430 03:23:38.997701 2721 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:23:39.001674 kubelet[2721]: I0430 03:23:39.001610 2721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:23:39.003581 kubelet[2721]: I0430 03:23:39.003554 2721 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:23:39.003838 kubelet[2721]: I0430 03:23:39.003824 2721 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:23:39.004534 kubelet[2721]: I0430 03:23:39.004514 2721 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:23:39.004769 kubelet[2721]: I0430 03:23:39.004759 2721 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:23:39.017378 kubelet[2721]: I0430 03:23:39.017238 2721 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:23:39.017718 kubelet[2721]: I0430 03:23:39.017680 2721 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:23:39.023088 kubelet[2721]: I0430 03:23:39.023057 2721 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:23:39.029022 kubelet[2721]: E0430 03:23:39.028762 2721 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:23:39.031264 kubelet[2721]: I0430 03:23:39.030027 2721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:23:39.041685 kubelet[2721]: I0430 03:23:39.041640 2721 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:23:39.043485 kubelet[2721]: I0430 03:23:39.043445 2721 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:23:39.043485 kubelet[2721]: I0430 03:23:39.043494 2721 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:23:39.043672 kubelet[2721]: E0430 03:23:39.043600 2721 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:23:39.105307 kubelet[2721]: I0430 03:23:39.105256 2721 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.121753 kubelet[2721]: I0430 03:23:39.121704 2721 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.123135 kubelet[2721]: I0430 03:23:39.121922 2721 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.145471 kubelet[2721]: E0430 03:23:39.144030 2721 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 03:23:39.145471 kubelet[2721]: I0430 03:23:39.144714 2721 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:23:39.145471 kubelet[2721]: I0430 03:23:39.144730 2721 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:23:39.145471 kubelet[2721]: I0430 03:23:39.144762 2721 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:23:39.145471 kubelet[2721]: I0430 03:23:39.145017 2721 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:23:39.145471 kubelet[2721]: I0430 03:23:39.145034 2721 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:23:39.145471 kubelet[2721]: I0430 03:23:39.145067 2721 policy_none.go:49] "None policy: Start" Apr 30 03:23:39.147423 kubelet[2721]: I0430 03:23:39.146164 2721 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:23:39.147423 kubelet[2721]: I0430 03:23:39.146203 2721 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:23:39.147740 kubelet[2721]: I0430 03:23:39.147682 2721 state_mem.go:75] "Updated machine memory state" Apr 30 03:23:39.154563 kubelet[2721]: I0430 03:23:39.151489 2721 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:23:39.154563 kubelet[2721]: I0430 03:23:39.151761 2721 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:23:39.154563 kubelet[2721]: I0430 03:23:39.153116 2721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:23:39.344608 kubelet[2721]: I0430 03:23:39.344220 2721 topology_manager.go:215] "Topology Admit Handler" podUID="acbaec2bd9013c64f16cf43e34305637" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.344608 kubelet[2721]: I0430 03:23:39.344435 2721 topology_manager.go:215] "Topology Admit Handler" podUID="2f7c3d5c12a98b4b98eadfac3b2ce5ed" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.344999 kubelet[2721]: I0430 03:23:39.344973 2721 topology_manager.go:215] "Topology Admit Handler" podUID="0fef68fb92d4efae7933621d4561acc0" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.360593 kubelet[2721]: W0430 03:23:39.358209 2721 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:23:39.360593 kubelet[2721]: W0430 03:23:39.360114 2721 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:23:39.360593 kubelet[2721]: W0430 03:23:39.360248 2721 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:23:39.412775 kubelet[2721]: I0430 03:23:39.412579 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.412775 kubelet[2721]: I0430 03:23:39.412643 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fef68fb92d4efae7933621d4561acc0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-0-0c5ff7085f\" (UID: \"0fef68fb92d4efae7933621d4561acc0\") " pod="kube-system/kube-scheduler-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.412775 kubelet[2721]: I0430 03:23:39.412673 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acbaec2bd9013c64f16cf43e34305637-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-0-0c5ff7085f\" (UID: \"acbaec2bd9013c64f16cf43e34305637\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.412775 kubelet[2721]: I0430 03:23:39.412703 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.412775 kubelet[2721]: I0430 03:23:39.412730 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.413149 kubelet[2721]: I0430 03:23:39.412754 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.413149 kubelet[2721]: I0430 03:23:39.412778 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acbaec2bd9013c64f16cf43e34305637-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-0-0c5ff7085f\" (UID: \"acbaec2bd9013c64f16cf43e34305637\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.413149 kubelet[2721]: 
I0430 03:23:39.412800 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acbaec2bd9013c64f16cf43e34305637-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-0-0c5ff7085f\" (UID: \"acbaec2bd9013c64f16cf43e34305637\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.413149 kubelet[2721]: I0430 03:23:39.412820 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f7c3d5c12a98b4b98eadfac3b2ce5ed-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-0-0c5ff7085f\" (UID: \"2f7c3d5c12a98b4b98eadfac3b2ce5ed\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:39.661829 kubelet[2721]: E0430 03:23:39.661690 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:39.662747 kubelet[2721]: E0430 03:23:39.662717 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:39.662972 kubelet[2721]: E0430 03:23:39.662943 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:39.979999 kubelet[2721]: I0430 03:23:39.979631 2721 apiserver.go:52] "Watching apiserver" Apr 30 03:23:40.007162 kubelet[2721]: I0430 03:23:40.007111 2721 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:23:40.081561 kubelet[2721]: E0430 03:23:40.080858 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:40.082665 kubelet[2721]: E0430 03:23:40.082579 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:40.112356 kubelet[2721]: W0430 03:23:40.109838 2721 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:23:40.112356 kubelet[2721]: E0430 03:23:40.109905 2721 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-0-0c5ff7085f\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-0-0c5ff7085f" Apr 30 03:23:40.112356 kubelet[2721]: E0430 03:23:40.110305 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:40.195593 kubelet[2721]: I0430 03:23:40.194988 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-0-0c5ff7085f" podStartSLOduration=1.194966916 podStartE2EDuration="1.194966916s" podCreationTimestamp="2025-04-30 03:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:23:40.154578145 +0000 UTC 
m=+1.316697509" watchObservedRunningTime="2025-04-30 03:23:40.194966916 +0000 UTC m=+1.357086280" Apr 30 03:23:40.229372 kubelet[2721]: I0430 03:23:40.227521 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-0-0c5ff7085f" podStartSLOduration=1.227497954 podStartE2EDuration="1.227497954s" podCreationTimestamp="2025-04-30 03:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:23:40.199423553 +0000 UTC m=+1.361542916" watchObservedRunningTime="2025-04-30 03:23:40.227497954 +0000 UTC m=+1.389617317" Apr 30 03:23:40.267851 kubelet[2721]: I0430 03:23:40.265737 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-0-0c5ff7085f" podStartSLOduration=1.2657181149999999 podStartE2EDuration="1.265718115s" podCreationTimestamp="2025-04-30 03:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:23:40.229837586 +0000 UTC m=+1.391956949" watchObservedRunningTime="2025-04-30 03:23:40.265718115 +0000 UTC m=+1.427837479" Apr 30 03:23:41.085360 kubelet[2721]: E0430 03:23:41.083133 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:41.090357 kubelet[2721]: E0430 03:23:41.087969 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:43.098748 kubelet[2721]: E0430 03:23:43.098669 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:44.027644 kubelet[2721]: E0430 03:23:44.027229 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:44.092721 kubelet[2721]: E0430 03:23:44.092664 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:44.500566 sudo[1797]: pam_unix(sudo:session): session closed for user root Apr 30 03:23:44.504817 sshd[1791]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:44.508510 systemd[1]: sshd@6-64.227.96.87:22-139.178.89.65:51696.service: Deactivated successfully. Apr 30 03:23:44.514392 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:23:44.514957 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:23:44.517084 systemd-logind[1563]: Removed session 7. 
Apr 30 03:23:50.106820 kubelet[2721]: E0430 03:23:50.106775 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:51.973908 kubelet[2721]: I0430 03:23:51.973365 2721 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:23:51.975758 containerd[1586]: time="2025-04-30T03:23:51.973763693Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:23:51.978974 kubelet[2721]: I0430 03:23:51.976285 2721 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:23:52.534197 kubelet[2721]: I0430 03:23:52.534120 2721 topology_manager.go:215] "Topology Admit Handler" podUID="ab1e27c3-1d43-4092-8a43-b00a20ca8e38" podNamespace="kube-system" podName="kube-proxy-7rkrv" Apr 30 03:23:52.629351 kubelet[2721]: I0430 03:23:52.629240 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fp25\" (UniqueName: \"kubernetes.io/projected/ab1e27c3-1d43-4092-8a43-b00a20ca8e38-kube-api-access-2fp25\") pod \"kube-proxy-7rkrv\" (UID: \"ab1e27c3-1d43-4092-8a43-b00a20ca8e38\") " pod="kube-system/kube-proxy-7rkrv" Apr 30 03:23:52.629351 kubelet[2721]: I0430 03:23:52.629312 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ab1e27c3-1d43-4092-8a43-b00a20ca8e38-kube-proxy\") pod \"kube-proxy-7rkrv\" (UID: \"ab1e27c3-1d43-4092-8a43-b00a20ca8e38\") " pod="kube-system/kube-proxy-7rkrv" Apr 30 03:23:52.629351 kubelet[2721]: I0430 03:23:52.629356 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab1e27c3-1d43-4092-8a43-b00a20ca8e38-xtables-lock\") pod \"kube-proxy-7rkrv\" (UID: \"ab1e27c3-1d43-4092-8a43-b00a20ca8e38\") " pod="kube-system/kube-proxy-7rkrv" Apr 30 03:23:52.629712 kubelet[2721]: I0430 03:23:52.629373 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab1e27c3-1d43-4092-8a43-b00a20ca8e38-lib-modules\") pod \"kube-proxy-7rkrv\" (UID: \"ab1e27c3-1d43-4092-8a43-b00a20ca8e38\") " pod="kube-system/kube-proxy-7rkrv" Apr 30 03:23:52.839133 kubelet[2721]: E0430 03:23:52.838988 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:52.839874 containerd[1586]: time="2025-04-30T03:23:52.839805828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7rkrv,Uid:ab1e27c3-1d43-4092-8a43-b00a20ca8e38,Namespace:kube-system,Attempt:0,}" Apr 30 03:23:52.882689 containerd[1586]: time="2025-04-30T03:23:52.882383815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:23:52.882689 containerd[1586]: time="2025-04-30T03:23:52.882476194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:23:52.882689 containerd[1586]: time="2025-04-30T03:23:52.882490127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:52.883158 containerd[1586]: time="2025-04-30T03:23:52.883047315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:52.947520 containerd[1586]: time="2025-04-30T03:23:52.947473152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7rkrv,Uid:ab1e27c3-1d43-4092-8a43-b00a20ca8e38,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c8c1fa9b18e3172d3cccdd7f12a09465d9a2d65de3a3a3b48f49cc5a63f04ee\"" Apr 30 03:23:52.948775 kubelet[2721]: E0430 03:23:52.948699 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:52.954183 containerd[1586]: time="2025-04-30T03:23:52.954096084Z" level=info msg="CreateContainer within sandbox \"4c8c1fa9b18e3172d3cccdd7f12a09465d9a2d65de3a3a3b48f49cc5a63f04ee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:23:52.976696 containerd[1586]: time="2025-04-30T03:23:52.976637870Z" level=info msg="CreateContainer within sandbox \"4c8c1fa9b18e3172d3cccdd7f12a09465d9a2d65de3a3a3b48f49cc5a63f04ee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09bde7042d3ba25a7511c70d8425d226ecab3f33fe54bff301f6ffb746505b9a\"" Apr 30 03:23:52.979361 containerd[1586]: time="2025-04-30T03:23:52.979198203Z" level=info msg="StartContainer for \"09bde7042d3ba25a7511c70d8425d226ecab3f33fe54bff301f6ffb746505b9a\"" Apr 30 03:23:53.093532 kubelet[2721]: I0430 03:23:53.091869 2721 topology_manager.go:215] "Topology Admit Handler" podUID="28b57270-f9f9-4bbd-a151-afd474a85235" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-5qr8x" Apr 30 03:23:53.127058 containerd[1586]: time="2025-04-30T03:23:53.126774329Z" level=info msg="StartContainer for \"09bde7042d3ba25a7511c70d8425d226ecab3f33fe54bff301f6ffb746505b9a\" returns successfully" Apr 30 03:23:53.127244 kubelet[2721]: E0430 03:23:53.127175 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:53.233474 kubelet[2721]: I0430 03:23:53.233359 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzlbf\" (UniqueName: \"kubernetes.io/projected/28b57270-f9f9-4bbd-a151-afd474a85235-kube-api-access-kzlbf\") pod \"tigera-operator-797db67f8-5qr8x\" (UID: \"28b57270-f9f9-4bbd-a151-afd474a85235\") " pod="tigera-operator/tigera-operator-797db67f8-5qr8x" Apr 30 03:23:53.233474 kubelet[2721]: I0430 03:23:53.233421 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/28b57270-f9f9-4bbd-a151-afd474a85235-var-lib-calico\") pod \"tigera-operator-797db67f8-5qr8x\" (UID: \"28b57270-f9f9-4bbd-a151-afd474a85235\") " pod="tigera-operator/tigera-operator-797db67f8-5qr8x" Apr 30 03:23:53.413524 containerd[1586]: time="2025-04-30T03:23:53.413387316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-5qr8x,Uid:28b57270-f9f9-4bbd-a151-afd474a85235,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:23:53.445418 containerd[1586]: time="2025-04-30T03:23:53.445164223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:23:53.445418 containerd[1586]: time="2025-04-30T03:23:53.445231115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:23:53.445418 containerd[1586]: time="2025-04-30T03:23:53.445246771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:53.446201 containerd[1586]: time="2025-04-30T03:23:53.445367493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:23:53.540280 containerd[1586]: time="2025-04-30T03:23:53.540227649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-5qr8x,Uid:28b57270-f9f9-4bbd-a151-afd474a85235,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3327e72befa09c3ba19c81a372b26398af11c05cd00dcee4c847aa92d3008f14\"" Apr 30 03:23:53.549752 containerd[1586]: time="2025-04-30T03:23:53.549703144Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:23:53.757519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495155461.mount: Deactivated successfully. Apr 30 03:23:54.135199 kubelet[2721]: E0430 03:23:54.134723 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:55.139581 kubelet[2721]: E0430 03:23:55.139545 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:23:56.219574 update_engine[1565]: I20250430 03:23:56.219426 1565 update_attempter.cc:509] Updating boot flags... Apr 30 03:23:56.256487 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3049) Apr 30 03:23:56.319506 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3053) Apr 30 03:24:00.051552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2637039014.mount: Deactivated successfully. 
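
The recurring dns.go:153 error means the droplet's /etc/resolv.conf hands kubelet more nameserver entries than the three it will propagate into pod resolv.conf files, so kubelet keeps the first three and logs the rest as omitted; that also explains why the applied line carries 67.207.67.2 twice. A minimal sketch of the truncation, using a hypothetical host resolv.conf (kubelet's real parsing lives in dns.go):

```go
package main

import (
	"fmt"
	"strings"
)

// kubelet propagates at most three nameservers into a pod's resolv.conf.
const maxNameservers = 3

func main() {
	// Hypothetical host resolv.conf with more entries than kubelet allows;
	// the duplicate mirrors the applied line in the log above.
	resolvConf := `nameserver 67.207.67.2
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 2001:4860:4860::8888
search example.internal`

	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```

Run as-is, this prints "Nameserver limits exceeded, applied line: 67.207.67.2 67.207.67.3 67.207.67.2", the same applied line the kubelet records show.
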
Apr 30 03:24:00.712950 containerd[1586]: time="2025-04-30T03:24:00.712874664Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:00.714450 containerd[1586]: time="2025-04-30T03:24:00.714289258Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:24:00.715299 containerd[1586]: time="2025-04-30T03:24:00.715076724Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:00.719710 containerd[1586]: time="2025-04-30T03:24:00.718107827Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:00.719710 containerd[1586]: time="2025-04-30T03:24:00.719501175Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 7.16927187s" Apr 30 03:24:00.719710 containerd[1586]: time="2025-04-30T03:24:00.719555194Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:24:00.728039 containerd[1586]: time="2025-04-30T03:24:00.727989006Z" level=info msg="CreateContainer within sandbox \"3327e72befa09c3ba19c81a372b26398af11c05cd00dcee4c847aa92d3008f14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:24:00.776288 containerd[1586]: time="2025-04-30T03:24:00.776091643Z" level=info msg="CreateContainer within sandbox \"3327e72befa09c3ba19c81a372b26398af11c05cd00dcee4c847aa92d3008f14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c806e49483b439044d7be5083e40d66725d35daaf8bbcb6986df3bca728db2a9\"" Apr 30 03:24:00.777849 containerd[1586]: time="2025-04-30T03:24:00.777796615Z" level=info msg="StartContainer for \"c806e49483b439044d7be5083e40d66725d35daaf8bbcb6986df3bca728db2a9\"" Apr 30 03:24:00.849213 containerd[1586]: time="2025-04-30T03:24:00.848721815Z" level=info msg="StartContainer for \"c806e49483b439044d7be5083e40d66725d35daaf8bbcb6986df3bca728db2a9\" returns successfully" Apr 30 03:24:01.181304 kubelet[2721]: I0430 03:24:01.180858 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7rkrv" podStartSLOduration=9.180833704 podStartE2EDuration="9.180833704s" podCreationTimestamp="2025-04-30 03:23:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:23:54.151142379 +0000 UTC m=+15.313261742" watchObservedRunningTime="2025-04-30 03:24:01.180833704 +0000 UTC m=+22.342953068" Apr 30 03:24:04.437288 kubelet[2721]: I0430 03:24:04.437062 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-5qr8x" podStartSLOduration=4.254557044 podStartE2EDuration="11.437043765s" podCreationTimestamp="2025-04-30 03:23:53 +0000 UTC" firstStartedPulling="2025-04-30 03:23:53.542462089 +0000 UTC m=+14.704581431" 
lastFinishedPulling="2025-04-30 03:24:00.724948796 +0000 UTC m=+21.887068152" observedRunningTime="2025-04-30 03:24:01.181191699 +0000 UTC m=+22.343311063" watchObservedRunningTime="2025-04-30 03:24:04.437043765 +0000 UTC m=+25.599163129" Apr 30 03:24:04.455373 kubelet[2721]: I0430 03:24:04.454985 2721 topology_manager.go:215] "Topology Admit Handler" podUID="5b047238-3151-4303-b273-637d47278f65" podNamespace="calico-system" podName="calico-typha-74d5cd55c6-t99vp" Apr 30 03:24:04.522351 kubelet[2721]: I0430 03:24:04.522013 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b047238-3151-4303-b273-637d47278f65-tigera-ca-bundle\") pod \"calico-typha-74d5cd55c6-t99vp\" (UID: \"5b047238-3151-4303-b273-637d47278f65\") " pod="calico-system/calico-typha-74d5cd55c6-t99vp" Apr 30 03:24:04.522351 kubelet[2721]: I0430 03:24:04.522110 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b047238-3151-4303-b273-637d47278f65-typha-certs\") pod \"calico-typha-74d5cd55c6-t99vp\" (UID: \"5b047238-3151-4303-b273-637d47278f65\") " pod="calico-system/calico-typha-74d5cd55c6-t99vp" Apr 30 03:24:04.522351 kubelet[2721]: I0430 03:24:04.522147 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7gpr\" (UniqueName: \"kubernetes.io/projected/5b047238-3151-4303-b273-637d47278f65-kube-api-access-h7gpr\") pod \"calico-typha-74d5cd55c6-t99vp\" (UID: \"5b047238-3151-4303-b273-637d47278f65\") " pod="calico-system/calico-typha-74d5cd55c6-t99vp" Apr 30 03:24:04.548085 kubelet[2721]: I0430 03:24:04.548033 2721 topology_manager.go:215] "Topology Admit Handler" podUID="9b492117-004b-46b6-ac13-7611080e97ce" podNamespace="calico-system" podName="calico-node-8n8t2" Apr 30 03:24:04.622969 kubelet[2721]: I0430 03:24:04.622863 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9b492117-004b-46b6-ac13-7611080e97ce-var-run-calico\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.622969 kubelet[2721]: I0430 03:24:04.622910 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9b492117-004b-46b6-ac13-7611080e97ce-flexvol-driver-host\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.622969 kubelet[2721]: I0430 03:24:04.622931 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b492117-004b-46b6-ac13-7611080e97ce-lib-modules\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.622969 kubelet[2721]: I0430 03:24:04.622975 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9b492117-004b-46b6-ac13-7611080e97ce-cni-bin-dir\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.623229 kubelet[2721]: I0430 03:24:04.622998 2721 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b492117-004b-46b6-ac13-7611080e97ce-tigera-ca-bundle\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.623229 kubelet[2721]: I0430 03:24:04.623019 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9b492117-004b-46b6-ac13-7611080e97ce-node-certs\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.623229 kubelet[2721]: I0430 03:24:04.623034 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2t9\" (UniqueName: \"kubernetes.io/projected/9b492117-004b-46b6-ac13-7611080e97ce-kube-api-access-7d2t9\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.623229 kubelet[2721]: I0430 03:24:04.623059 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9b492117-004b-46b6-ac13-7611080e97ce-var-lib-calico\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.623229 kubelet[2721]: I0430 03:24:04.623076 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b492117-004b-46b6-ac13-7611080e97ce-xtables-lock\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.623410 kubelet[2721]: I0430 03:24:04.623092 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9b492117-004b-46b6-ac13-7611080e97ce-cni-net-dir\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.623410 kubelet[2721]: I0430 03:24:04.623122 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9b492117-004b-46b6-ac13-7611080e97ce-policysync\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.623410 kubelet[2721]: I0430 03:24:04.623139 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9b492117-004b-46b6-ac13-7611080e97ce-cni-log-dir\") pod \"calico-node-8n8t2\" (UID: \"9b492117-004b-46b6-ac13-7611080e97ce\") " pod="calico-system/calico-node-8n8t2" Apr 30 03:24:04.693281 kubelet[2721]: I0430 03:24:04.692039 2721 topology_manager.go:215] "Topology Admit Handler" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" podNamespace="calico-system" podName="csi-node-driver-b29ps" Apr 30 03:24:04.695458 kubelet[2721]: E0430 03:24:04.695046 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-b29ps" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" Apr 30 03:24:04.725337 kubelet[2721]: I0430 03:24:04.725289 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/67c47235-153a-4d06-ba98-7cf5056b9032-registration-dir\") pod \"csi-node-driver-b29ps\" (UID: \"67c47235-153a-4d06-ba98-7cf5056b9032\") " pod="calico-system/csi-node-driver-b29ps" Apr 30 03:24:04.726510 kubelet[2721]: I0430 03:24:04.726430 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/67c47235-153a-4d06-ba98-7cf5056b9032-socket-dir\") pod \"csi-node-driver-b29ps\" (UID: \"67c47235-153a-4d06-ba98-7cf5056b9032\") " pod="calico-system/csi-node-driver-b29ps" Apr 30 03:24:04.726670 kubelet[2721]: I0430 03:24:04.726609 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/67c47235-153a-4d06-ba98-7cf5056b9032-varrun\") pod \"csi-node-driver-b29ps\" (UID: \"67c47235-153a-4d06-ba98-7cf5056b9032\") " pod="calico-system/csi-node-driver-b29ps" Apr 30 03:24:04.731961 kubelet[2721]: I0430 03:24:04.730801 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67c47235-153a-4d06-ba98-7cf5056b9032-kubelet-dir\") pod \"csi-node-driver-b29ps\" (UID: \"67c47235-153a-4d06-ba98-7cf5056b9032\") " pod="calico-system/csi-node-driver-b29ps" Apr 30 03:24:04.731961 kubelet[2721]: I0430 03:24:04.731494 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4wzr\" (UniqueName: \"kubernetes.io/projected/67c47235-153a-4d06-ba98-7cf5056b9032-kube-api-access-f4wzr\") pod \"csi-node-driver-b29ps\" (UID: \"67c47235-153a-4d06-ba98-7cf5056b9032\") " pod="calico-system/csi-node-driver-b29ps" Apr 30 03:24:04.745601 kubelet[2721]: E0430 03:24:04.745557 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.747352 kubelet[2721]: W0430 03:24:04.745609 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.747352 kubelet[2721]: E0430 03:24:04.747166 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.761373 kubelet[2721]: E0430 03:24:04.760134 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.761373 kubelet[2721]: W0430 03:24:04.760167 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.766678 kubelet[2721]: E0430 03:24:04.766625 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:24:04.766864 kubelet[2721]: E0430 03:24:04.766748 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.766864 kubelet[2721]: W0430 03:24:04.766765 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.766864 kubelet[2721]: E0430 03:24:04.766786 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.768451 kubelet[2721]: E0430 03:24:04.768413 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.768451 kubelet[2721]: W0430 03:24:04.768444 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.768665 kubelet[2721]: E0430 03:24:04.768471 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.780300 kubelet[2721]: E0430 03:24:04.780259 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.780300 kubelet[2721]: W0430 03:24:04.780289 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.780560 kubelet[2721]: E0430 03:24:04.780538 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.782559 kubelet[2721]: E0430 03:24:04.782429 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.782559 kubelet[2721]: W0430 03:24:04.782463 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.782559 kubelet[2721]: E0430 03:24:04.782494 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:24:04.791951 kubelet[2721]: E0430 03:24:04.790972 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:04.828421 kubelet[2721]: E0430 03:24:04.825717 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.828421 kubelet[2721]: W0430 03:24:04.825763 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.828421 kubelet[2721]: E0430 03:24:04.825792 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.833689 kubelet[2721]: E0430 03:24:04.833469 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.833689 kubelet[2721]: W0430 03:24:04.833531 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.833689 kubelet[2721]: E0430 03:24:04.833575 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.837223 kubelet[2721]: E0430 03:24:04.835872 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.837223 kubelet[2721]: W0430 03:24:04.835902 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.837223 kubelet[2721]: E0430 03:24:04.835934 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.840375 kubelet[2721]: E0430 03:24:04.838869 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.840375 kubelet[2721]: W0430 03:24:04.838902 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.840375 kubelet[2721]: E0430 03:24:04.838934 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:24:04.842009 kubelet[2721]: E0430 03:24:04.840818 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.842009 kubelet[2721]: W0430 03:24:04.840847 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.842009 kubelet[2721]: E0430 03:24:04.840880 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.847367 kubelet[2721]: E0430 03:24:04.845402 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.847367 kubelet[2721]: W0430 03:24:04.845433 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.847367 kubelet[2721]: E0430 03:24:04.845463 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.849357 kubelet[2721]: E0430 03:24:04.847919 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.849885 kubelet[2721]: W0430 03:24:04.849645 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.849885 kubelet[2721]: E0430 03:24:04.849705 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.851758 kubelet[2721]: E0430 03:24:04.851380 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.851758 kubelet[2721]: W0430 03:24:04.851402 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.851758 kubelet[2721]: E0430 03:24:04.851425 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:24:04.855389 kubelet[2721]: E0430 03:24:04.852943 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.855389 kubelet[2721]: W0430 03:24:04.852970 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.855676 containerd[1586]: time="2025-04-30T03:24:04.854978749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74d5cd55c6-t99vp,Uid:5b047238-3151-4303-b273-637d47278f65,Namespace:calico-system,Attempt:0,}" Apr 30 03:24:04.856577 kubelet[2721]: E0430 03:24:04.856514 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.856765 kubelet[2721]: E0430 03:24:04.856744 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.856856 kubelet[2721]: W0430 03:24:04.856841 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.856942 kubelet[2721]: E0430 03:24:04.856929 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.857237 kubelet[2721]: E0430 03:24:04.857224 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.857314 kubelet[2721]: W0430 03:24:04.857304 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.857411 kubelet[2721]: E0430 03:24:04.857400 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.857733 kubelet[2721]: E0430 03:24:04.857715 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.857811 kubelet[2721]: W0430 03:24:04.857801 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.857890 kubelet[2721]: E0430 03:24:04.857879 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:24:04.860119 kubelet[2721]: E0430 03:24:04.860083 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.860466 kubelet[2721]: W0430 03:24:04.860446 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.861045 kubelet[2721]: E0430 03:24:04.861025 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.861317 kubelet[2721]: E0430 03:24:04.861305 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:04.868299 kubelet[2721]: E0430 03:24:04.867457 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.868299 kubelet[2721]: W0430 03:24:04.867522 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.868299 kubelet[2721]: E0430 03:24:04.867595 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.894706 containerd[1586]: time="2025-04-30T03:24:04.894317230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8n8t2,Uid:9b492117-004b-46b6-ac13-7611080e97ce,Namespace:calico-system,Attempt:0,}" Apr 30 03:24:04.895843 kubelet[2721]: E0430 03:24:04.895805 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.896655 kubelet[2721]: W0430 03:24:04.896418 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.896655 kubelet[2721]: E0430 03:24:04.896472 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.897052 kubelet[2721]: E0430 03:24:04.897033 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.897134 kubelet[2721]: W0430 03:24:04.897123 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.897193 kubelet[2721]: E0430 03:24:04.897182 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:24:04.897565 kubelet[2721]: E0430 03:24:04.897534 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.897760 kubelet[2721]: W0430 03:24:04.897743 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.897875 kubelet[2721]: E0430 03:24:04.897862 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.899516 kubelet[2721]: E0430 03:24:04.899361 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.899516 kubelet[2721]: W0430 03:24:04.899378 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.899516 kubelet[2721]: E0430 03:24:04.899395 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.899954 kubelet[2721]: E0430 03:24:04.899841 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.899954 kubelet[2721]: W0430 03:24:04.899854 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.899954 kubelet[2721]: E0430 03:24:04.899874 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.900321 kubelet[2721]: E0430 03:24:04.900223 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.900321 kubelet[2721]: W0430 03:24:04.900233 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.900321 kubelet[2721]: E0430 03:24:04.900250 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.900758 kubelet[2721]: E0430 03:24:04.900503 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.900758 kubelet[2721]: W0430 03:24:04.900511 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.900758 kubelet[2721]: E0430 03:24:04.900525 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:24:04.901066 kubelet[2721]: E0430 03:24:04.901043 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.901268 kubelet[2721]: W0430 03:24:04.901164 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.901268 kubelet[2721]: E0430 03:24:04.901184 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.905549 kubelet[2721]: E0430 03:24:04.905402 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.905549 kubelet[2721]: W0430 03:24:04.905426 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.905549 kubelet[2721]: E0430 03:24:04.905451 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.912631 kubelet[2721]: E0430 03:24:04.912600 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.913973 kubelet[2721]: W0430 03:24:04.913476 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.913973 kubelet[2721]: E0430 03:24:04.913590 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.914763 kubelet[2721]: E0430 03:24:04.914449 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.914763 kubelet[2721]: W0430 03:24:04.914464 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.917364 kubelet[2721]: E0430 03:24:04.914930 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.918139 kubelet[2721]: E0430 03:24:04.918050 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.918139 kubelet[2721]: W0430 03:24:04.918070 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.918139 kubelet[2721]: E0430 03:24:04.918094 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:24:04.932067 kubelet[2721]: E0430 03:24:04.931860 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:24:04.932067 kubelet[2721]: W0430 03:24:04.931894 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:24:04.932067 kubelet[2721]: E0430 03:24:04.931927 2721 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:24:04.935653 containerd[1586]: time="2025-04-30T03:24:04.934897779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:04.935653 containerd[1586]: time="2025-04-30T03:24:04.934990894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:04.935653 containerd[1586]: time="2025-04-30T03:24:04.935002856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:04.937350 containerd[1586]: time="2025-04-30T03:24:04.935232381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:04.978169 containerd[1586]: time="2025-04-30T03:24:04.977662286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:04.981318 containerd[1586]: time="2025-04-30T03:24:04.977771991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:04.981318 containerd[1586]: time="2025-04-30T03:24:04.977892362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:04.983011 containerd[1586]: time="2025-04-30T03:24:04.981648506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:05.110527 containerd[1586]: time="2025-04-30T03:24:05.110478907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8n8t2,Uid:9b492117-004b-46b6-ac13-7611080e97ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4f756d19fac8a04bffb84989a9840691942ffe910316fa2bfeec61c50a09970\"" Apr 30 03:24:05.112571 kubelet[2721]: E0430 03:24:05.112378 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:05.123440 containerd[1586]: time="2025-04-30T03:24:05.123169483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:24:05.126673 containerd[1586]: time="2025-04-30T03:24:05.126611519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74d5cd55c6-t99vp,Uid:5b047238-3151-4303-b273-637d47278f65,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba2940a328d3d7a57e08274a6bb24eaded256530adffb21da195977c42f1bb9b\"" Apr 30 03:24:05.128498 kubelet[2721]: E0430 03:24:05.128437 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:06.044690 kubelet[2721]: E0430 03:24:06.044620 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b29ps" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" Apr 30 03:24:07.547234 containerd[1586]: time="2025-04-30T03:24:07.546987466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:07.548914 containerd[1586]: time="2025-04-30T03:24:07.548408490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:24:07.549631 containerd[1586]: time="2025-04-30T03:24:07.549260714Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:07.552942 containerd[1586]: time="2025-04-30T03:24:07.552880307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:07.554428 containerd[1586]: time="2025-04-30T03:24:07.553859649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.43063364s" Apr 30 03:24:07.554428 containerd[1586]: time="2025-04-30T03:24:07.553910898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:24:07.557131 containerd[1586]: 
time="2025-04-30T03:24:07.557068506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:24:07.566107 containerd[1586]: time="2025-04-30T03:24:07.565670562Z" level=info msg="CreateContainer within sandbox \"e4f756d19fac8a04bffb84989a9840691942ffe910316fa2bfeec61c50a09970\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:24:07.593566 containerd[1586]: time="2025-04-30T03:24:07.593221333Z" level=info msg="CreateContainer within sandbox \"e4f756d19fac8a04bffb84989a9840691942ffe910316fa2bfeec61c50a09970\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"684fcb7be6eb261934ee080ce5a839c371f8cb465868d8df80e06be750dab887\"" Apr 30 03:24:07.595857 containerd[1586]: time="2025-04-30T03:24:07.595811258Z" level=info msg="StartContainer for \"684fcb7be6eb261934ee080ce5a839c371f8cb465868d8df80e06be750dab887\"" Apr 30 03:24:07.706922 containerd[1586]: time="2025-04-30T03:24:07.705851894Z" level=info msg="StartContainer for \"684fcb7be6eb261934ee080ce5a839c371f8cb465868d8df80e06be750dab887\" returns successfully" Apr 30 03:24:07.764257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-684fcb7be6eb261934ee080ce5a839c371f8cb465868d8df80e06be750dab887-rootfs.mount: Deactivated successfully. Apr 30 03:24:07.786243 containerd[1586]: time="2025-04-30T03:24:07.786083695Z" level=info msg="shim disconnected" id=684fcb7be6eb261934ee080ce5a839c371f8cb465868d8df80e06be750dab887 namespace=k8s.io Apr 30 03:24:07.786243 containerd[1586]: time="2025-04-30T03:24:07.786197104Z" level=warning msg="cleaning up after shim disconnected" id=684fcb7be6eb261934ee080ce5a839c371f8cb465868d8df80e06be750dab887 namespace=k8s.io Apr 30 03:24:07.786243 containerd[1586]: time="2025-04-30T03:24:07.786222513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:24:07.809727 containerd[1586]: time="2025-04-30T03:24:07.807575158Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:24:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:24:08.046185 kubelet[2721]: E0430 03:24:08.046130 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b29ps" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" Apr 30 03:24:08.191266 kubelet[2721]: E0430 03:24:08.191081 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:10.044800 kubelet[2721]: E0430 03:24:10.044536 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b29ps" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" Apr 30 03:24:10.778305 containerd[1586]: time="2025-04-30T03:24:10.778228486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:10.779435 containerd[1586]: time="2025-04-30T03:24:10.779371163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active 
requests=0, bytes read=30426870" Apr 30 03:24:10.780467 containerd[1586]: time="2025-04-30T03:24:10.780388594Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:10.783038 containerd[1586]: time="2025-04-30T03:24:10.782605446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:10.783542 containerd[1586]: time="2025-04-30T03:24:10.783505613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.226379304s" Apr 30 03:24:10.783667 containerd[1586]: time="2025-04-30T03:24:10.783565993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:24:10.789095 containerd[1586]: time="2025-04-30T03:24:10.789052831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:24:10.814446 containerd[1586]: time="2025-04-30T03:24:10.814231474Z" level=info msg="CreateContainer within sandbox \"ba2940a328d3d7a57e08274a6bb24eaded256530adffb21da195977c42f1bb9b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:24:10.831727 containerd[1586]: time="2025-04-30T03:24:10.831644565Z" level=info msg="CreateContainer within sandbox \"ba2940a328d3d7a57e08274a6bb24eaded256530adffb21da195977c42f1bb9b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4fc7153873e72024a50ccab8f78403dfeee26d96e73dcf2dbeecdc4236893ef8\"" Apr 30 03:24:10.833908 containerd[1586]: time="2025-04-30T03:24:10.833847316Z" level=info msg="StartContainer for \"4fc7153873e72024a50ccab8f78403dfeee26d96e73dcf2dbeecdc4236893ef8\"" Apr 30 03:24:11.001241 containerd[1586]: time="2025-04-30T03:24:11.001189054Z" level=info msg="StartContainer for \"4fc7153873e72024a50ccab8f78403dfeee26d96e73dcf2dbeecdc4236893ef8\" returns successfully" Apr 30 03:24:11.198606 kubelet[2721]: E0430 03:24:11.198567 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:12.044447 kubelet[2721]: E0430 03:24:12.044316 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b29ps" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" Apr 30 03:24:12.201052 kubelet[2721]: I0430 03:24:12.200373 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:24:12.201052 kubelet[2721]: E0430 03:24:12.200967 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:14.044963 kubelet[2721]: E0430 03:24:14.044415 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b29ps" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" Apr 30 03:24:15.697261 containerd[1586]: time="2025-04-30T03:24:15.697144144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:15.698539 containerd[1586]: time="2025-04-30T03:24:15.698482070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:24:15.699390 containerd[1586]: time="2025-04-30T03:24:15.699283861Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:15.701528 containerd[1586]: time="2025-04-30T03:24:15.701308564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:15.702398 containerd[1586]: time="2025-04-30T03:24:15.702368682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.912973407s" Apr 30 03:24:15.702487 containerd[1586]: time="2025-04-30T03:24:15.702402269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:24:15.706101 containerd[1586]: time="2025-04-30T03:24:15.706056874Z" level=info msg="CreateContainer within sandbox \"e4f756d19fac8a04bffb84989a9840691942ffe910316fa2bfeec61c50a09970\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:24:15.720065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158073148.mount: Deactivated successfully. 
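
The driver-call.go flood at 03:24:04 above is kubelet probing its FlexVolume plugin directory (/opt/libexec/kubernetes/kubelet-plugins/volume/exec) before calico-node's flexvol-driver init container, which presumably installs the nodeagent~uds/uds binary, had run at 03:24:07. The two messages in each triple chain mechanically: exec'ing a binary that is not there fails with "executable file not found in $PATH", and unmarshalling the resulting empty output fails with "unexpected end of JSON input". Both reproduce in a few lines of Go (a simplified sketch of the probe, not kubelet's actual driver-call code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// "uds" is not on $PATH, so os/exec fails exactly as in the log.
	out, err := exec.Command("uds", "init").CombinedOutput()
	fmt.Printf("driver call failed: %v, output: %q\n", err, out)

	// kubelet then tries to decode the (empty) output as the driver's
	// JSON status, producing the second error of each log triple.
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("Failed to unmarshal output:", err) // unexpected end of JSON input
	}
}
```
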
Apr 30 03:24:15.723834 containerd[1586]: time="2025-04-30T03:24:15.723771884Z" level=info msg="CreateContainer within sandbox \"e4f756d19fac8a04bffb84989a9840691942ffe910316fa2bfeec61c50a09970\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9010999f35053631374d551ba12ccb8334f47ae785f7a8d5ca288c3cc3994c96\"" Apr 30 03:24:15.724750 containerd[1586]: time="2025-04-30T03:24:15.724712409Z" level=info msg="StartContainer for \"9010999f35053631374d551ba12ccb8334f47ae785f7a8d5ca288c3cc3994c96\"" Apr 30 03:24:15.853958 containerd[1586]: time="2025-04-30T03:24:15.853726327Z" level=info msg="StartContainer for \"9010999f35053631374d551ba12ccb8334f47ae785f7a8d5ca288c3cc3994c96\" returns successfully" Apr 30 03:24:16.044814 kubelet[2721]: E0430 03:24:16.044158 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b29ps" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" Apr 30 03:24:16.217873 kubelet[2721]: E0430 03:24:16.217574 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:16.237232 kubelet[2721]: I0430 03:24:16.236857 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74d5cd55c6-t99vp" podStartSLOduration=6.580648884 podStartE2EDuration="12.236838782s" podCreationTimestamp="2025-04-30 03:24:04 +0000 UTC" firstStartedPulling="2025-04-30 03:24:05.129567296 +0000 UTC m=+26.291686642" lastFinishedPulling="2025-04-30 03:24:10.785757184 +0000 UTC m=+31.947876540" observedRunningTime="2025-04-30 03:24:11.215491358 +0000 UTC m=+32.377610721" watchObservedRunningTime="2025-04-30 03:24:16.236838782 +0000 UTC m=+37.398958164" Apr 30 03:24:16.759859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9010999f35053631374d551ba12ccb8334f47ae785f7a8d5ca288c3cc3994c96-rootfs.mount: Deactivated successfully. 
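
Every pod start in this log walks the same three CRI calls from kubelet to containerd: RunPodSandbox returns a sandbox id, CreateContainer returns a container id inside it, and StartContainer runs it; the install-cni container above just completed that cycle. Below is a bare-bones client sketch against the CRI v1 API. The socket path and image tag are illustrative assumptions, and the pod metadata is borrowed from the kube-proxy records earlier in the log; this is not kubelet's own code path.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket path; adjust for the host in question.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Step 1: the sandbox, as in "RunPodSandbox for &PodSandboxMetadata{...}".
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-7rkrv", // from the log
			Namespace: "kube-system",
			Uid:       "ab1e27c3-1d43-4092-8a43-b00a20ca8e38",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: the container inside that sandbox ("CreateContainer within sandbox ...").
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Illustrative tag; the log never names the kube-proxy image.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.30.0"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Step 3: "StartContainer for ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: ctr.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
}
```
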
Apr 30 03:24:16.761619 containerd[1586]: time="2025-04-30T03:24:16.760964822Z" level=info msg="shim disconnected" id=9010999f35053631374d551ba12ccb8334f47ae785f7a8d5ca288c3cc3994c96 namespace=k8s.io Apr 30 03:24:16.761619 containerd[1586]: time="2025-04-30T03:24:16.761043975Z" level=warning msg="cleaning up after shim disconnected" id=9010999f35053631374d551ba12ccb8334f47ae785f7a8d5ca288c3cc3994c96 namespace=k8s.io Apr 30 03:24:16.761619 containerd[1586]: time="2025-04-30T03:24:16.761055619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:24:16.798211 kubelet[2721]: I0430 03:24:16.797541 2721 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:24:16.846181 kubelet[2721]: I0430 03:24:16.846103 2721 topology_manager.go:215] "Topology Admit Handler" podUID="39887ce7-27ce-4a68-a0cb-cc6961010eef" podNamespace="kube-system" podName="coredns-7db6d8ff4d-trcpw" Apr 30 03:24:16.848154 kubelet[2721]: I0430 03:24:16.847396 2721 topology_manager.go:215] "Topology Admit Handler" podUID="ca50be34-7ff2-4c44-99a2-9d71206348f1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9hc9l" Apr 30 03:24:16.856301 kubelet[2721]: I0430 03:24:16.856246 2721 topology_manager.go:215] "Topology Admit Handler" podUID="2319ef4b-2c33-4712-bebd-81dcb419db1f" podNamespace="calico-system" podName="calico-kube-controllers-65cd484dd7-znmv5" Apr 30 03:24:16.868890 kubelet[2721]: I0430 03:24:16.868830 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5fzc\" (UniqueName: \"kubernetes.io/projected/2319ef4b-2c33-4712-bebd-81dcb419db1f-kube-api-access-m5fzc\") pod \"calico-kube-controllers-65cd484dd7-znmv5\" (UID: \"2319ef4b-2c33-4712-bebd-81dcb419db1f\") " pod="calico-system/calico-kube-controllers-65cd484dd7-znmv5" Apr 30 03:24:16.868890 kubelet[2721]: I0430 03:24:16.868890 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca50be34-7ff2-4c44-99a2-9d71206348f1-config-volume\") pod \"coredns-7db6d8ff4d-9hc9l\" (UID: \"ca50be34-7ff2-4c44-99a2-9d71206348f1\") " pod="kube-system/coredns-7db6d8ff4d-9hc9l" Apr 30 03:24:16.869080 kubelet[2721]: I0430 03:24:16.868914 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2319ef4b-2c33-4712-bebd-81dcb419db1f-tigera-ca-bundle\") pod \"calico-kube-controllers-65cd484dd7-znmv5\" (UID: \"2319ef4b-2c33-4712-bebd-81dcb419db1f\") " pod="calico-system/calico-kube-controllers-65cd484dd7-znmv5" Apr 30 03:24:16.869080 kubelet[2721]: I0430 03:24:16.868941 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mwjq\" (UniqueName: \"kubernetes.io/projected/39887ce7-27ce-4a68-a0cb-cc6961010eef-kube-api-access-9mwjq\") pod \"coredns-7db6d8ff4d-trcpw\" (UID: \"39887ce7-27ce-4a68-a0cb-cc6961010eef\") " pod="kube-system/coredns-7db6d8ff4d-trcpw" Apr 30 03:24:16.869080 kubelet[2721]: I0430 03:24:16.868985 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zn56\" (UniqueName: \"kubernetes.io/projected/ca50be34-7ff2-4c44-99a2-9d71206348f1-kube-api-access-5zn56\") pod \"coredns-7db6d8ff4d-9hc9l\" (UID: \"ca50be34-7ff2-4c44-99a2-9d71206348f1\") " pod="kube-system/coredns-7db6d8ff4d-9hc9l" Apr 30 03:24:16.869080 kubelet[2721]: I0430 03:24:16.869011 
2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39887ce7-27ce-4a68-a0cb-cc6961010eef-config-volume\") pod \"coredns-7db6d8ff4d-trcpw\" (UID: \"39887ce7-27ce-4a68-a0cb-cc6961010eef\") " pod="kube-system/coredns-7db6d8ff4d-trcpw" Apr 30 03:24:16.875583 kubelet[2721]: I0430 03:24:16.875526 2721 topology_manager.go:215] "Topology Admit Handler" podUID="9c63b596-a9ae-4e22-9c6f-207ff0492217" podNamespace="calico-apiserver" podName="calico-apiserver-6678cff58b-ph7rm" Apr 30 03:24:16.875874 kubelet[2721]: I0430 03:24:16.875845 2721 topology_manager.go:215] "Topology Admit Handler" podUID="474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4" podNamespace="calico-apiserver" podName="calico-apiserver-6678cff58b-ff657" Apr 30 03:24:16.969673 kubelet[2721]: I0430 03:24:16.969565 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7btm\" (UniqueName: \"kubernetes.io/projected/9c63b596-a9ae-4e22-9c6f-207ff0492217-kube-api-access-s7btm\") pod \"calico-apiserver-6678cff58b-ph7rm\" (UID: \"9c63b596-a9ae-4e22-9c6f-207ff0492217\") " pod="calico-apiserver/calico-apiserver-6678cff58b-ph7rm" Apr 30 03:24:16.969673 kubelet[2721]: I0430 03:24:16.969608 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c63b596-a9ae-4e22-9c6f-207ff0492217-calico-apiserver-certs\") pod \"calico-apiserver-6678cff58b-ph7rm\" (UID: \"9c63b596-a9ae-4e22-9c6f-207ff0492217\") " pod="calico-apiserver/calico-apiserver-6678cff58b-ph7rm" Apr 30 03:24:16.969673 kubelet[2721]: I0430 03:24:16.969682 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4-calico-apiserver-certs\") pod \"calico-apiserver-6678cff58b-ff657\" (UID: \"474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4\") " pod="calico-apiserver/calico-apiserver-6678cff58b-ff657" Apr 30 03:24:16.969931 kubelet[2721]: I0430 03:24:16.969745 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj7jh\" (UniqueName: \"kubernetes.io/projected/474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4-kube-api-access-gj7jh\") pod \"calico-apiserver-6678cff58b-ff657\" (UID: \"474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4\") " pod="calico-apiserver/calico-apiserver-6678cff58b-ff657" Apr 30 03:24:17.160983 kubelet[2721]: E0430 03:24:17.160893 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:17.163299 containerd[1586]: time="2025-04-30T03:24:17.163151595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hc9l,Uid:ca50be34-7ff2-4c44-99a2-9d71206348f1,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:17.165975 containerd[1586]: time="2025-04-30T03:24:17.165318052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd484dd7-znmv5,Uid:2319ef4b-2c33-4712-bebd-81dcb419db1f,Namespace:calico-system,Attempt:0,}" Apr 30 03:24:17.171480 kubelet[2721]: E0430 03:24:17.169185 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 
03:24:17.171662 containerd[1586]: time="2025-04-30T03:24:17.170569551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-trcpw,Uid:39887ce7-27ce-4a68-a0cb-cc6961010eef,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:17.193567 containerd[1586]: time="2025-04-30T03:24:17.192237980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6678cff58b-ph7rm,Uid:9c63b596-a9ae-4e22-9c6f-207ff0492217,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:24:17.195991 containerd[1586]: time="2025-04-30T03:24:17.195949379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6678cff58b-ff657,Uid:474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:24:17.221659 kubelet[2721]: E0430 03:24:17.221627 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:17.224391 containerd[1586]: time="2025-04-30T03:24:17.224244669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:24:17.554737 containerd[1586]: time="2025-04-30T03:24:17.554404529Z" level=error msg="Failed to destroy network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.559098 containerd[1586]: time="2025-04-30T03:24:17.559040457Z" level=error msg="Failed to destroy network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.562532 containerd[1586]: time="2025-04-30T03:24:17.562469447Z" level=error msg="encountered an error cleaning up failed sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.562898 containerd[1586]: time="2025-04-30T03:24:17.562577693Z" level=error msg="encountered an error cleaning up failed sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.570529 containerd[1586]: time="2025-04-30T03:24:17.569660893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hc9l,Uid:ca50be34-7ff2-4c44-99a2-9d71206348f1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.571840 containerd[1586]: time="2025-04-30T03:24:17.571770114Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6678cff58b-ph7rm,Uid:9c63b596-a9ae-4e22-9c6f-207ff0492217,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.574833 containerd[1586]: time="2025-04-30T03:24:17.574765650Z" level=error msg="Failed to destroy network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.575318 containerd[1586]: time="2025-04-30T03:24:17.575282649Z" level=error msg="encountered an error cleaning up failed sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.575498 containerd[1586]: time="2025-04-30T03:24:17.575476165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-trcpw,Uid:39887ce7-27ce-4a68-a0cb-cc6961010eef,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.575932 containerd[1586]: time="2025-04-30T03:24:17.575741390Z" level=error msg="Failed to destroy network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.576204 containerd[1586]: time="2025-04-30T03:24:17.576171585Z" level=error msg="encountered an error cleaning up failed sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.576253 containerd[1586]: time="2025-04-30T03:24:17.576213439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd484dd7-znmv5,Uid:2319ef4b-2c33-4712-bebd-81dcb419db1f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.576374 containerd[1586]: time="2025-04-30T03:24:17.575849288Z" level=error msg="Failed to destroy network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.576523 containerd[1586]: time="2025-04-30T03:24:17.576497396Z" level=error msg="encountered an error cleaning up failed sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.576555 containerd[1586]: time="2025-04-30T03:24:17.576532698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6678cff58b-ff657,Uid:474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.577358 kubelet[2721]: E0430 03:24:17.576925 2721 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.577358 kubelet[2721]: E0430 03:24:17.576992 2721 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.577358 kubelet[2721]: E0430 03:24:17.577025 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hc9l" Apr 30 03:24:17.577358 kubelet[2721]: E0430 03:24:17.577032 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6678cff58b-ph7rm" Apr 30 03:24:17.578858 kubelet[2721]: E0430 03:24:17.577049 2721 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hc9l" Apr 30 03:24:17.578858 kubelet[2721]: E0430 03:24:17.577069 2721 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.578858 kubelet[2721]: E0430 03:24:17.577117 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9hc9l_kube-system(ca50be34-7ff2-4c44-99a2-9d71206348f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9hc9l_kube-system(ca50be34-7ff2-4c44-99a2-9d71206348f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hc9l" podUID="ca50be34-7ff2-4c44-99a2-9d71206348f1" Apr 30 03:24:17.579014 kubelet[2721]: E0430 03:24:17.577138 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-trcpw" Apr 30 03:24:17.579014 kubelet[2721]: E0430 03:24:17.577158 2721 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-trcpw" Apr 30 03:24:17.579014 kubelet[2721]: E0430 03:24:17.577196 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-trcpw_kube-system(39887ce7-27ce-4a68-a0cb-cc6961010eef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-trcpw_kube-system(39887ce7-27ce-4a68-a0cb-cc6961010eef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-trcpw" podUID="39887ce7-27ce-4a68-a0cb-cc6961010eef" Apr 30 03:24:17.579134 kubelet[2721]: E0430 03:24:17.577260 2721 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.579134 kubelet[2721]: E0430 03:24:17.577281 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd484dd7-znmv5" Apr 30 03:24:17.579134 kubelet[2721]: E0430 03:24:17.577296 2721 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd484dd7-znmv5" Apr 30 03:24:17.579134 kubelet[2721]: E0430 03:24:17.576951 2721 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:17.579579 kubelet[2721]: E0430 03:24:17.577418 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6678cff58b-ff657" Apr 30 03:24:17.579579 kubelet[2721]: E0430 03:24:17.577437 2721 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6678cff58b-ff657" Apr 30 03:24:17.579579 kubelet[2721]: E0430 03:24:17.577468 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6678cff58b-ff657_calico-apiserver(474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6678cff58b-ff657_calico-apiserver(474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6678cff58b-ff657" podUID="474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4" Apr 30 03:24:17.579735 kubelet[2721]: E0430 03:24:17.577054 2721 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6678cff58b-ph7rm" Apr 30 03:24:17.579735 kubelet[2721]: E0430 03:24:17.577508 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6678cff58b-ph7rm_calico-apiserver(9c63b596-a9ae-4e22-9c6f-207ff0492217)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6678cff58b-ph7rm_calico-apiserver(9c63b596-a9ae-4e22-9c6f-207ff0492217)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6678cff58b-ph7rm" podUID="9c63b596-a9ae-4e22-9c6f-207ff0492217" Apr 30 03:24:17.579877 kubelet[2721]: E0430 03:24:17.578544 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd484dd7-znmv5_calico-system(2319ef4b-2c33-4712-bebd-81dcb419db1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd484dd7-znmv5_calico-system(2319ef4b-2c33-4712-bebd-81dcb419db1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd484dd7-znmv5" podUID="2319ef4b-2c33-4712-bebd-81dcb419db1f" Apr 30 03:24:18.051115 containerd[1586]: time="2025-04-30T03:24:18.049409110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b29ps,Uid:67c47235-153a-4d06-ba98-7cf5056b9032,Namespace:calico-system,Attempt:0,}" Apr 30 03:24:18.159799 containerd[1586]: time="2025-04-30T03:24:18.159742261Z" level=error msg="Failed to destroy network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.160138 containerd[1586]: time="2025-04-30T03:24:18.160109270Z" level=error msg="encountered an error cleaning up failed sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.160225 containerd[1586]: time="2025-04-30T03:24:18.160167562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b29ps,Uid:67c47235-153a-4d06-ba98-7cf5056b9032,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.160492 kubelet[2721]: E0430 03:24:18.160446 2721 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.160595 kubelet[2721]: E0430 03:24:18.160516 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b29ps" Apr 30 03:24:18.160595 kubelet[2721]: E0430 03:24:18.160537 2721 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b29ps" Apr 30 03:24:18.160693 kubelet[2721]: E0430 03:24:18.160593 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b29ps_calico-system(67c47235-153a-4d06-ba98-7cf5056b9032)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b29ps_calico-system(67c47235-153a-4d06-ba98-7cf5056b9032)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b29ps" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" Apr 30 03:24:18.167420 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4-shm.mount: Deactivated successfully. 
Apr 30 03:24:18.225288 kubelet[2721]: I0430 03:24:18.225234 2721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:18.233266 kubelet[2721]: I0430 03:24:18.232296 2721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:18.236056 containerd[1586]: time="2025-04-30T03:24:18.234975638Z" level=info msg="StopPodSandbox for \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\"" Apr 30 03:24:18.236318 containerd[1586]: time="2025-04-30T03:24:18.236277236Z" level=info msg="StopPodSandbox for \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\"" Apr 30 03:24:18.237794 containerd[1586]: time="2025-04-30T03:24:18.237431389Z" level=info msg="Ensure that sandbox 11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de in task-service has been cleanup successfully" Apr 30 03:24:18.239278 containerd[1586]: time="2025-04-30T03:24:18.237467079Z" level=info msg="Ensure that sandbox e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a in task-service has been cleanup successfully" Apr 30 03:24:18.241866 kubelet[2721]: I0430 03:24:18.241777 2721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:18.245093 containerd[1586]: time="2025-04-30T03:24:18.243398821Z" level=info msg="StopPodSandbox for \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\"" Apr 30 03:24:18.245093 containerd[1586]: time="2025-04-30T03:24:18.244775641Z" level=info msg="Ensure that sandbox 5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17 in task-service has been cleanup successfully" Apr 30 03:24:18.248371 kubelet[2721]: I0430 03:24:18.247457 2721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:18.253400 containerd[1586]: time="2025-04-30T03:24:18.252690999Z" level=info msg="StopPodSandbox for \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\"" Apr 30 03:24:18.254102 containerd[1586]: time="2025-04-30T03:24:18.253784912Z" level=info msg="Ensure that sandbox 237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4 in task-service has been cleanup successfully" Apr 30 03:24:18.258820 kubelet[2721]: I0430 03:24:18.257902 2721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:18.263386 containerd[1586]: time="2025-04-30T03:24:18.263199201Z" level=info msg="StopPodSandbox for \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\"" Apr 30 03:24:18.265359 containerd[1586]: time="2025-04-30T03:24:18.265165064Z" level=info msg="Ensure that sandbox e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f in task-service has been cleanup successfully" Apr 30 03:24:18.268903 kubelet[2721]: I0430 03:24:18.268865 2721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:18.272860 containerd[1586]: time="2025-04-30T03:24:18.271949647Z" level=info msg="StopPodSandbox for \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\"" Apr 30 03:24:18.279694 
containerd[1586]: time="2025-04-30T03:24:18.279206301Z" level=info msg="Ensure that sandbox 49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf in task-service has been cleanup successfully" Apr 30 03:24:18.409939 containerd[1586]: time="2025-04-30T03:24:18.409875750Z" level=error msg="StopPodSandbox for \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\" failed" error="failed to destroy network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.410639 kubelet[2721]: E0430 03:24:18.410585 2721 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:18.411019 kubelet[2721]: E0430 03:24:18.410662 2721 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4"} Apr 30 03:24:18.411019 kubelet[2721]: E0430 03:24:18.410768 2721 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67c47235-153a-4d06-ba98-7cf5056b9032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:24:18.411019 kubelet[2721]: E0430 03:24:18.410802 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67c47235-153a-4d06-ba98-7cf5056b9032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b29ps" podUID="67c47235-153a-4d06-ba98-7cf5056b9032" Apr 30 03:24:18.422817 containerd[1586]: time="2025-04-30T03:24:18.422528199Z" level=error msg="StopPodSandbox for \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\" failed" error="failed to destroy network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.423580 kubelet[2721]: E0430 03:24:18.422861 2721 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:18.423580 kubelet[2721]: E0430 03:24:18.422932 2721 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf"} Apr 30 03:24:18.423580 kubelet[2721]: E0430 03:24:18.422977 2721 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39887ce7-27ce-4a68-a0cb-cc6961010eef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:24:18.423580 kubelet[2721]: E0430 03:24:18.423007 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39887ce7-27ce-4a68-a0cb-cc6961010eef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-trcpw" podUID="39887ce7-27ce-4a68-a0cb-cc6961010eef" Apr 30 03:24:18.429154 containerd[1586]: time="2025-04-30T03:24:18.429038240Z" level=error msg="StopPodSandbox for \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\" failed" error="failed to destroy network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.429657 kubelet[2721]: E0430 03:24:18.429497 2721 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:18.429657 kubelet[2721]: E0430 03:24:18.429575 2721 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f"} Apr 30 03:24:18.429657 kubelet[2721]: E0430 03:24:18.429649 2721 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:24:18.430472 kubelet[2721]: E0430 03:24:18.429684 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6678cff58b-ff657" podUID="474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4" Apr 30 03:24:18.444749 containerd[1586]: time="2025-04-30T03:24:18.444589875Z" level=error msg="StopPodSandbox for \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\" failed" error="failed to destroy network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.445186 kubelet[2721]: E0430 03:24:18.445084 2721 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:18.445186 kubelet[2721]: E0430 03:24:18.445149 2721 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de"} Apr 30 03:24:18.447356 kubelet[2721]: E0430 03:24:18.445914 2721 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2319ef4b-2c33-4712-bebd-81dcb419db1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:24:18.447356 kubelet[2721]: E0430 03:24:18.445990 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2319ef4b-2c33-4712-bebd-81dcb419db1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd484dd7-znmv5" podUID="2319ef4b-2c33-4712-bebd-81dcb419db1f" Apr 30 03:24:18.448120 containerd[1586]: time="2025-04-30T03:24:18.448069209Z" level=error msg="StopPodSandbox for \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\" failed" error="failed to destroy network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.449230 kubelet[2721]: E0430 03:24:18.449172 2721 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:18.449394 kubelet[2721]: E0430 03:24:18.449252 2721 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a"} Apr 30 03:24:18.449394 kubelet[2721]: E0430 03:24:18.449299 2721 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca50be34-7ff2-4c44-99a2-9d71206348f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:24:18.449938 kubelet[2721]: E0430 03:24:18.449888 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca50be34-7ff2-4c44-99a2-9d71206348f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hc9l" podUID="ca50be34-7ff2-4c44-99a2-9d71206348f1" Apr 30 03:24:18.453246 containerd[1586]: time="2025-04-30T03:24:18.453165988Z" level=error msg="StopPodSandbox for \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\" failed" error="failed to destroy network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:24:18.453681 kubelet[2721]: E0430 03:24:18.453634 2721 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:18.453813 kubelet[2721]: E0430 03:24:18.453700 2721 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17"} Apr 30 03:24:18.453813 kubelet[2721]: E0430 03:24:18.453754 2721 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c63b596-a9ae-4e22-9c6f-207ff0492217\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:24:18.453956 kubelet[2721]: E0430 03:24:18.453809 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c63b596-a9ae-4e22-9c6f-207ff0492217\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6678cff58b-ph7rm" podUID="9c63b596-a9ae-4e22-9c6f-207ff0492217" Apr 30 03:24:18.749016 kubelet[2721]: I0430 03:24:18.747879 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:24:18.751453 kubelet[2721]: E0430 03:24:18.750826 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:19.274650 kubelet[2721]: E0430 03:24:19.274581 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:20.442824 systemd[1]: Started sshd@7-64.227.96.87:22-139.178.89.65:42040.service - OpenSSH per-connection server daemon (139.178.89.65:42040). Apr 30 03:24:20.547782 sshd[3709]: Accepted publickey for core from 139.178.89.65 port 42040 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:24:20.549891 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:24:20.564412 systemd-logind[1563]: New session 8 of user core. Apr 30 03:24:20.569312 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:24:20.821597 sshd[3709]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:20.826771 systemd[1]: sshd@7-64.227.96.87:22-139.178.89.65:42040.service: Deactivated successfully. Apr 30 03:24:20.831570 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:24:20.833745 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:24:20.835041 systemd-logind[1563]: Removed session 8. Apr 30 03:24:24.292601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516418291.mount: Deactivated successfully. 
Apr 30 03:24:24.451946 containerd[1586]: time="2025-04-30T03:24:24.417971945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:24:24.479349 containerd[1586]: time="2025-04-30T03:24:24.479033858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:24.517906 containerd[1586]: time="2025-04-30T03:24:24.517635876Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:24.519175 containerd[1586]: time="2025-04-30T03:24:24.519128955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:24.523676 containerd[1586]: time="2025-04-30T03:24:24.523500853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.295366551s" Apr 30 03:24:24.523676 containerd[1586]: time="2025-04-30T03:24:24.523578860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:24:24.610886 containerd[1586]: time="2025-04-30T03:24:24.610821134Z" level=info msg="CreateContainer within sandbox \"e4f756d19fac8a04bffb84989a9840691942ffe910316fa2bfeec61c50a09970\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:24:24.785214 containerd[1586]: time="2025-04-30T03:24:24.784178274Z" level=info msg="CreateContainer within sandbox \"e4f756d19fac8a04bffb84989a9840691942ffe910316fa2bfeec61c50a09970\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ccaeecb4ef021db815b08c5616a3a3d56a2e7cc0a3d3855fe64ce564a404bb18\"" Apr 30 03:24:24.789394 containerd[1586]: time="2025-04-30T03:24:24.789320328Z" level=info msg="StartContainer for \"ccaeecb4ef021db815b08c5616a3a3d56a2e7cc0a3d3855fe64ce564a404bb18\"" Apr 30 03:24:24.993409 containerd[1586]: time="2025-04-30T03:24:24.993170518Z" level=info msg="StartContainer for \"ccaeecb4ef021db815b08c5616a3a3d56a2e7cc0a3d3855fe64ce564a404bb18\" returns successfully" Apr 30 03:24:25.086607 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:24:25.089492 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Apr 30 03:24:25.096445 systemd-journald[1137]: Under memory pressure, flushing caches. Apr 30 03:24:25.095769 systemd-resolved[1478]: Under memory pressure, flushing caches. Apr 30 03:24:25.095892 systemd-resolved[1478]: Flushed all caches. 
Apr 30 03:24:25.325462 kubelet[2721]: E0430 03:24:25.325129 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:25.404620 kubelet[2721]: I0430 03:24:25.404548 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8n8t2" podStartSLOduration=1.970201818 podStartE2EDuration="21.404473858s" podCreationTimestamp="2025-04-30 03:24:04 +0000 UTC" firstStartedPulling="2025-04-30 03:24:05.115798456 +0000 UTC m=+26.277917798" lastFinishedPulling="2025-04-30 03:24:24.550070484 +0000 UTC m=+45.712189838" observedRunningTime="2025-04-30 03:24:25.403092082 +0000 UTC m=+46.565211461" watchObservedRunningTime="2025-04-30 03:24:25.404473858 +0000 UTC m=+46.566593221" Apr 30 03:24:25.830784 systemd[1]: Started sshd@8-64.227.96.87:22-139.178.89.65:42050.service - OpenSSH per-connection server daemon (139.178.89.65:42050). Apr 30 03:24:25.914190 sshd[3796]: Accepted publickey for core from 139.178.89.65 port 42050 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:24:25.917261 sshd[3796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:24:25.924736 systemd-logind[1563]: New session 9 of user core. Apr 30 03:24:25.945518 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:24:26.118884 sshd[3796]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:26.122690 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:24:26.123617 systemd[1]: sshd@8-64.227.96.87:22-139.178.89.65:42050.service: Deactivated successfully. Apr 30 03:24:26.128665 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:24:26.132745 systemd-logind[1563]: Removed session 9. Apr 30 03:24:26.311380 kubelet[2721]: I0430 03:24:26.311213 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:24:26.312201 kubelet[2721]: E0430 03:24:26.312172 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:26.898356 kernel: bpftool[3931]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:24:27.144313 systemd-resolved[1478]: Under memory pressure, flushing caches. Apr 30 03:24:27.145532 systemd-journald[1137]: Under memory pressure, flushing caches. Apr 30 03:24:27.144344 systemd-resolved[1478]: Flushed all caches. 
Apr 30 03:24:27.206231 systemd-networkd[1222]: vxlan.calico: Link UP Apr 30 03:24:27.206241 systemd-networkd[1222]: vxlan.calico: Gained carrier Apr 30 03:24:29.191646 systemd-networkd[1222]: vxlan.calico: Gained IPv6LL Apr 30 03:24:30.949579 kubelet[2721]: I0430 03:24:30.949442 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:24:30.951703 kubelet[2721]: E0430 03:24:30.950667 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:31.050197 containerd[1586]: time="2025-04-30T03:24:31.049773141Z" level=info msg="StopPodSandbox for \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\"" Apr 30 03:24:31.052441 containerd[1586]: time="2025-04-30T03:24:31.051461489Z" level=info msg="StopPodSandbox for \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\"" Apr 30 03:24:31.134896 systemd[1]: Started sshd@9-64.227.96.87:22-139.178.89.65:48938.service - OpenSSH per-connection server daemon (139.178.89.65:48938). Apr 30 03:24:31.316707 sshd[4063]: Accepted publickey for core from 139.178.89.65 port 48938 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:24:31.322564 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:24:31.346048 systemd[1]: run-containerd-runc-k8s.io-ccaeecb4ef021db815b08c5616a3a3d56a2e7cc0a3d3855fe64ce564a404bb18-runc.HdVnbL.mount: Deactivated successfully. Apr 30 03:24:31.361267 systemd-logind[1563]: New session 10 of user core. Apr 30 03:24:31.365816 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:24:31.393742 kubelet[2721]: E0430 03:24:31.391491 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:31.625710 sshd[4063]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:31.631872 systemd[1]: sshd@9-64.227.96.87:22-139.178.89.65:48938.service: Deactivated successfully. Apr 30 03:24:31.640939 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:24:31.643064 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:24:31.645297 systemd-logind[1563]: Removed session 10. Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.300 [INFO][4052] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.307 [INFO][4052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" iface="eth0" netns="/var/run/netns/cni-ff650392-6f92-a345-6156-a3aec6f11a05" Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.307 [INFO][4052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" iface="eth0" netns="/var/run/netns/cni-ff650392-6f92-a345-6156-a3aec6f11a05" Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.310 [INFO][4052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" iface="eth0" netns="/var/run/netns/cni-ff650392-6f92-a345-6156-a3aec6f11a05" Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.310 [INFO][4052] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.315 [INFO][4052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.624 [INFO][4080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" HandleID="k8s-pod-network.e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.626 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.627 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.647 [WARNING][4080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" HandleID="k8s-pod-network.e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.647 [INFO][4080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" HandleID="k8s-pod-network.e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.651 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:31.664137 containerd[1586]: 2025-04-30 03:24:31.656 [INFO][4052] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:31.672471 systemd[1]: run-netns-cni\x2dff650392\x2d6f92\x2da345\x2d6156\x2da3aec6f11a05.mount: Deactivated successfully. Apr 30 03:24:31.681531 containerd[1586]: time="2025-04-30T03:24:31.681435005Z" level=info msg="TearDown network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\" successfully" Apr 30 03:24:31.681531 containerd[1586]: time="2025-04-30T03:24:31.681516562Z" level=info msg="StopPodSandbox for \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\" returns successfully" Apr 30 03:24:31.682140 kubelet[2721]: E0430 03:24:31.682105 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.344 [INFO][4051] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.349 [INFO][4051] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" iface="eth0" netns="/var/run/netns/cni-c3c743ec-d8b0-500b-147b-0e54cd695c6d" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.349 [INFO][4051] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" iface="eth0" netns="/var/run/netns/cni-c3c743ec-d8b0-500b-147b-0e54cd695c6d" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.356 [INFO][4051] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" iface="eth0" netns="/var/run/netns/cni-c3c743ec-d8b0-500b-147b-0e54cd695c6d" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.357 [INFO][4051] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.357 [INFO][4051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.624 [INFO][4084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" HandleID="k8s-pod-network.5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.628 [INFO][4084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.651 [INFO][4084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.661 [WARNING][4084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" HandleID="k8s-pod-network.5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.661 [INFO][4084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" HandleID="k8s-pod-network.5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.670 [INFO][4084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:31.682432 containerd[1586]: 2025-04-30 03:24:31.678 [INFO][4051] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:31.684770 containerd[1586]: time="2025-04-30T03:24:31.683953896Z" level=info msg="TearDown network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\" successfully" Apr 30 03:24:31.684770 containerd[1586]: time="2025-04-30T03:24:31.684004441Z" level=info msg="StopPodSandbox for \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\" returns successfully" Apr 30 03:24:31.693507 systemd[1]: run-netns-cni\x2dc3c743ec\x2dd8b0\x2d500b\x2d147b\x2d0e54cd695c6d.mount: Deactivated successfully. Apr 30 03:24:31.698459 containerd[1586]: time="2025-04-30T03:24:31.698007749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hc9l,Uid:ca50be34-7ff2-4c44-99a2-9d71206348f1,Namespace:kube-system,Attempt:1,}" Apr 30 03:24:31.698459 containerd[1586]: time="2025-04-30T03:24:31.698169983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6678cff58b-ph7rm,Uid:9c63b596-a9ae-4e22-9c6f-207ff0492217,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:24:31.991778 systemd-networkd[1222]: cali0139d77aa9b: Link UP Apr 30 03:24:31.992201 systemd-networkd[1222]: cali0139d77aa9b: Gained carrier Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.832 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0 calico-apiserver-6678cff58b- calico-apiserver 9c63b596-a9ae-4e22-9c6f-207ff0492217 901 0 2025-04-30 03:24:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6678cff58b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-0-0c5ff7085f calico-apiserver-6678cff58b-ph7rm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0139d77aa9b [] []}} ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ph7rm" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.832 [INFO][4119] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ph7rm" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.890 [INFO][4144] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" HandleID="k8s-pod-network.507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.913 [INFO][4144] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" HandleID="k8s-pod-network.507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000385290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-0-0c5ff7085f", "pod":"calico-apiserver-6678cff58b-ph7rm", "timestamp":"2025-04-30 03:24:31.890565081 +0000 UTC"}, Hostname:"ci-4081.3.3-0-0c5ff7085f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.913 [INFO][4144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.913 [INFO][4144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.913 [INFO][4144] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-0c5ff7085f' Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.918 [INFO][4144] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.931 [INFO][4144] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.940 [INFO][4144] ipam/ipam.go 489: Trying affinity for 192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.943 [INFO][4144] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.947 [INFO][4144] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.948 [INFO][4144] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.951 [INFO][4144] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022 Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.962 [INFO][4144] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.973 [INFO][4144] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.129/26] block=192.168.43.128/26 handle="k8s-pod-network.507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.973 [INFO][4144] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.129/26] handle="k8s-pod-network.507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.973 [INFO][4144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:24:32.020973 containerd[1586]: 2025-04-30 03:24:31.973 [INFO][4144] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.129/26] IPv6=[] ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" HandleID="k8s-pod-network.507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:32.022456 containerd[1586]: 2025-04-30 03:24:31.979 [INFO][4119] cni-plugin/k8s.go 386: Populated endpoint ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ph7rm" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0", GenerateName:"calico-apiserver-6678cff58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c63b596-a9ae-4e22-9c6f-207ff0492217", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6678cff58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"", Pod:"calico-apiserver-6678cff58b-ph7rm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0139d77aa9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:32.022456 containerd[1586]: 2025-04-30 03:24:31.980 [INFO][4119] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.129/32] ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ph7rm" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:32.022456 containerd[1586]: 2025-04-30 03:24:31.980 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0139d77aa9b ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ph7rm" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:32.022456 containerd[1586]: 2025-04-30 03:24:31.985 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ph7rm" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:32.022456 containerd[1586]: 2025-04-30 03:24:31.985 [INFO][4119] cni-plugin/k8s.go 414: Added Mac,
interface name, and active container ID to endpoint ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ph7rm" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0", GenerateName:"calico-apiserver-6678cff58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c63b596-a9ae-4e22-9c6f-207ff0492217", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6678cff58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022", Pod:"calico-apiserver-6678cff58b-ph7rm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0139d77aa9b", MAC:"2a:a3:7a:04:cd:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:32.022456 containerd[1586]: 2025-04-30 03:24:32.004 [INFO][4119] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ph7rm" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:32.100003 systemd-networkd[1222]: cali9d51be7614a: Link UP Apr 30 03:24:32.102641 systemd-networkd[1222]: cali9d51be7614a: Gained carrier Apr 30 03:24:32.128033 containerd[1586]: time="2025-04-30T03:24:32.127866990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:32.129795 containerd[1586]: time="2025-04-30T03:24:32.128044752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:32.129795 containerd[1586]: time="2025-04-30T03:24:32.128062305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:32.129795 containerd[1586]: time="2025-04-30T03:24:32.128534223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:31.827 [INFO][4123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0 coredns-7db6d8ff4d- kube-system ca50be34-7ff2-4c44-99a2-9d71206348f1 900 0 2025-04-30 03:23:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-0-0c5ff7085f coredns-7db6d8ff4d-9hc9l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d51be7614a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hc9l" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:31.827 [INFO][4123] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hc9l" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:31.891 [INFO][4145] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" HandleID="k8s-pod-network.d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:31.914 [INFO][4145] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" HandleID="k8s-pod-network.d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265970), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-0-0c5ff7085f", "pod":"coredns-7db6d8ff4d-9hc9l", "timestamp":"2025-04-30 03:24:31.891863655 +0000 UTC"}, Hostname:"ci-4081.3.3-0-0c5ff7085f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:31.914 [INFO][4145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:31.974 [INFO][4145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:31.974 [INFO][4145] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-0c5ff7085f' Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:31.980 [INFO][4145] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.008 [INFO][4145] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.025 [INFO][4145] ipam/ipam.go 489: Trying affinity for 192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.049 [INFO][4145] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.061 [INFO][4145] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.061 [INFO][4145] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.067 [INFO][4145] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871 Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.074 [INFO][4145] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.088 [INFO][4145] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.130/26] block=192.168.43.128/26 handle="k8s-pod-network.d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.088 [INFO][4145] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.130/26] handle="k8s-pod-network.d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.088 [INFO][4145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:24:32.135810 containerd[1586]: 2025-04-30 03:24:32.088 [INFO][4145] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.130/26] IPv6=[] ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" HandleID="k8s-pod-network.d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:32.139649 containerd[1586]: 2025-04-30 03:24:32.092 [INFO][4123] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hc9l" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca50be34-7ff2-4c44-99a2-9d71206348f1", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 23, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"", Pod:"coredns-7db6d8ff4d-9hc9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d51be7614a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:32.139649 containerd[1586]: 2025-04-30 03:24:32.092 [INFO][4123] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.130/32] ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hc9l" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:32.139649 containerd[1586]: 2025-04-30 03:24:32.092 [INFO][4123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d51be7614a ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hc9l" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:32.139649 containerd[1586]: 2025-04-30 03:24:32.104 [INFO][4123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hc9l"
WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:32.139649 containerd[1586]: 2025-04-30 03:24:32.105 [INFO][4123] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hc9l" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca50be34-7ff2-4c44-99a2-9d71206348f1", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 23, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871", Pod:"coredns-7db6d8ff4d-9hc9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d51be7614a", MAC:"82:30:57:1d:e7:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:32.139649 containerd[1586]: 2025-04-30 03:24:32.122 [INFO][4123] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hc9l" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:32.239397 containerd[1586]: time="2025-04-30T03:24:32.237920454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:32.239397 containerd[1586]: time="2025-04-30T03:24:32.238024583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:32.239397 containerd[1586]: time="2025-04-30T03:24:32.238049688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:32.239397 containerd[1586]: time="2025-04-30T03:24:32.238213573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:32.300518 containerd[1586]: time="2025-04-30T03:24:32.300220100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6678cff58b-ph7rm,Uid:9c63b596-a9ae-4e22-9c6f-207ff0492217,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022\"" Apr 30 03:24:32.357991 containerd[1586]: time="2025-04-30T03:24:32.357928466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:24:32.390623 containerd[1586]: time="2025-04-30T03:24:32.390519994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hc9l,Uid:ca50be34-7ff2-4c44-99a2-9d71206348f1,Namespace:kube-system,Attempt:1,} returns sandbox id \"d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871\"" Apr 30 03:24:32.394023 kubelet[2721]: E0430 03:24:32.393937 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:32.403311 containerd[1586]: time="2025-04-30T03:24:32.403029125Z" level=info msg="CreateContainer within sandbox \"d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:24:32.425863 containerd[1586]: time="2025-04-30T03:24:32.425672231Z" level=info msg="CreateContainer within sandbox \"d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9ae81caa5176e3c847cafb8fdc5abb3a9c602437d3e07d6efa75d627e54c089\"" Apr 30 03:24:32.428120 containerd[1586]: time="2025-04-30T03:24:32.426981354Z" level=info msg="StartContainer for \"f9ae81caa5176e3c847cafb8fdc5abb3a9c602437d3e07d6efa75d627e54c089\"" Apr 30 03:24:32.509766 containerd[1586]: time="2025-04-30T03:24:32.509699049Z" level=info msg="StartContainer for \"f9ae81caa5176e3c847cafb8fdc5abb3a9c602437d3e07d6efa75d627e54c089\" returns successfully" Apr 30 03:24:33.047769 containerd[1586]: time="2025-04-30T03:24:33.047303729Z" level=info msg="StopPodSandbox for \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\"" Apr 30 03:24:33.047769 containerd[1586]: time="2025-04-30T03:24:33.047600151Z" level=info msg="StopPodSandbox for \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\"" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.168 [INFO][4336] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.171 [INFO][4336] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" iface="eth0" netns="/var/run/netns/cni-461effdf-634c-3f49-4c14-c4245f1ed9da" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.172 [INFO][4336] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" iface="eth0" netns="/var/run/netns/cni-461effdf-634c-3f49-4c14-c4245f1ed9da" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.178 [INFO][4336] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" iface="eth0" netns="/var/run/netns/cni-461effdf-634c-3f49-4c14-c4245f1ed9da" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.178 [INFO][4336] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.178 [INFO][4336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.222 [INFO][4349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" HandleID="k8s-pod-network.49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.223 [INFO][4349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.223 [INFO][4349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.231 [WARNING][4349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" HandleID="k8s-pod-network.49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.231 [INFO][4349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" HandleID="k8s-pod-network.49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.235 [INFO][4349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:33.244353 containerd[1586]: 2025-04-30 03:24:33.238 [INFO][4336] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:33.246598 containerd[1586]: time="2025-04-30T03:24:33.242092122Z" level=info msg="TearDown network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\" successfully" Apr 30 03:24:33.246598 containerd[1586]: time="2025-04-30T03:24:33.244440433Z" level=info msg="StopPodSandbox for \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\" returns successfully" Apr 30 03:24:33.247161 kubelet[2721]: E0430 03:24:33.245388 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:33.251578 containerd[1586]: time="2025-04-30T03:24:33.250406893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-trcpw,Uid:39887ce7-27ce-4a68-a0cb-cc6961010eef,Namespace:kube-system,Attempt:1,}" Apr 30 03:24:33.253972 systemd[1]: run-netns-cni\x2d461effdf\x2d634c\x2d3f49\x2d4c14\x2dc4245f1ed9da.mount: Deactivated successfully. 
Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.176 [INFO][4337] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.178 [INFO][4337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" iface="eth0" netns="/var/run/netns/cni-1f471d39-9f02-f290-91cf-d4879e1414ba" Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.179 [INFO][4337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" iface="eth0" netns="/var/run/netns/cni-1f471d39-9f02-f290-91cf-d4879e1414ba" Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.179 [INFO][4337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" iface="eth0" netns="/var/run/netns/cni-1f471d39-9f02-f290-91cf-d4879e1414ba" Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.179 [INFO][4337] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.179 [INFO][4337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.258 [INFO][4351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" HandleID="k8s-pod-network.237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.259 [INFO][4351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.259 [INFO][4351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.267 [WARNING][4351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" HandleID="k8s-pod-network.237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.268 [INFO][4351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" HandleID="k8s-pod-network.237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.275 [INFO][4351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:33.294800 containerd[1586]: 2025-04-30 03:24:33.289 [INFO][4337] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:33.295653 containerd[1586]: time="2025-04-30T03:24:33.295242097Z" level=info msg="TearDown network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\" successfully" Apr 30 03:24:33.296844 containerd[1586]: time="2025-04-30T03:24:33.295657183Z" level=info msg="StopPodSandbox for \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\" returns successfully" Apr 30 03:24:33.302037 containerd[1586]: time="2025-04-30T03:24:33.301661990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b29ps,Uid:67c47235-153a-4d06-ba98-7cf5056b9032,Namespace:calico-system,Attempt:1,}" Apr 30 03:24:33.302428 systemd[1]: run-netns-cni\x2d1f471d39\x2d9f02\x2df290\x2d91cf\x2dd4879e1414ba.mount: Deactivated successfully. Apr 30 03:24:33.401743 kubelet[2721]: E0430 03:24:33.401705 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:33.429172 kubelet[2721]: I0430 03:24:33.426970 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9hc9l" podStartSLOduration=40.42694962 podStartE2EDuration="40.42694962s" podCreationTimestamp="2025-04-30 03:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:24:33.42683027 +0000 UTC m=+54.588949634" watchObservedRunningTime="2025-04-30 03:24:33.42694962 +0000 UTC m=+54.589068983" Apr 30 03:24:33.543555 systemd-networkd[1222]: cali9d51be7614a: Gained IPv6LL Apr 30 03:24:33.577175 systemd-networkd[1222]: cali81296ef1864: Link UP Apr 30 03:24:33.577747 systemd-networkd[1222]: cali81296ef1864: Gained carrier Apr 30 03:24:33.611087 systemd-networkd[1222]: cali0139d77aa9b: Gained IPv6LL Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.356 [INFO][4364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0 coredns-7db6d8ff4d- kube-system 39887ce7-27ce-4a68-a0cb-cc6961010eef 928 0 2025-04-30 03:23:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-0-0c5ff7085f coredns-7db6d8ff4d-trcpw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81296ef1864 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Namespace="kube-system" Pod="coredns-7db6d8ff4d-trcpw" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.356 [INFO][4364] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Namespace="kube-system" Pod="coredns-7db6d8ff4d-trcpw" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.458 [INFO][4386] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" 
HandleID="k8s-pod-network.1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.494 [INFO][4386] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" HandleID="k8s-pod-network.1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030fae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-0-0c5ff7085f", "pod":"coredns-7db6d8ff4d-trcpw", "timestamp":"2025-04-30 03:24:33.458821211 +0000 UTC"}, Hostname:"ci-4081.3.3-0-0c5ff7085f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.495 [INFO][4386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.495 [INFO][4386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.495 [INFO][4386] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-0c5ff7085f' Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.499 [INFO][4386] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.510 [INFO][4386] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.522 [INFO][4386] ipam/ipam.go 489: Trying affinity for 192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.526 [INFO][4386] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.532 [INFO][4386] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.532 [INFO][4386] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.537 [INFO][4386] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272 Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.546 [INFO][4386] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.569 [INFO][4386] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.131/26] block=192.168.43.128/26 handle="k8s-pod-network.1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.570 
[INFO][4386] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.131/26] handle="k8s-pod-network.1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.570 [INFO][4386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:33.614452 containerd[1586]: 2025-04-30 03:24:33.570 [INFO][4386] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.131/26] IPv6=[] ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" HandleID="k8s-pod-network.1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.615854 containerd[1586]: 2025-04-30 03:24:33.573 [INFO][4364] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Namespace="kube-system" Pod="coredns-7db6d8ff4d-trcpw" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"39887ce7-27ce-4a68-a0cb-cc6961010eef", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 23, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"", Pod:"coredns-7db6d8ff4d-trcpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81296ef1864", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:33.615854 containerd[1586]: 2025-04-30 03:24:33.573 [INFO][4364] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.131/32] ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Namespace="kube-system" Pod="coredns-7db6d8ff4d-trcpw" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.615854 containerd[1586]: 2025-04-30 03:24:33.574 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81296ef1864 ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Namespace="kube-system" Pod="coredns-7db6d8ff4d-trcpw"
WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.615854 containerd[1586]: 2025-04-30 03:24:33.578 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Namespace="kube-system" Pod="coredns-7db6d8ff4d-trcpw" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.615854 containerd[1586]: 2025-04-30 03:24:33.578 [INFO][4364] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Namespace="kube-system" Pod="coredns-7db6d8ff4d-trcpw" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"39887ce7-27ce-4a68-a0cb-cc6961010eef", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 23, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272", Pod:"coredns-7db6d8ff4d-trcpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81296ef1864", MAC:"62:8e:e0:35:6d:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:33.615854 containerd[1586]: 2025-04-30 03:24:33.598 [INFO][4364] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272" Namespace="kube-system" Pod="coredns-7db6d8ff4d-trcpw" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:33.664517 systemd-networkd[1222]: cali9139a752b1a: Link UP Apr 30 03:24:33.666245 systemd-networkd[1222]: cali9139a752b1a: Gained carrier Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.421 [INFO][4375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0 csi-node-driver- calico-system 67c47235-153a-4d06-ba98-7cf5056b9032 929 0 2025-04-30 03:24:04 +0000 UTC
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-0-0c5ff7085f csi-node-driver-b29ps eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9139a752b1a [] []}} ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Namespace="calico-system" Pod="csi-node-driver-b29ps" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.424 [INFO][4375] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Namespace="calico-system" Pod="csi-node-driver-b29ps" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.537 [INFO][4396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" HandleID="k8s-pod-network.0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.570 [INFO][4396] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" HandleID="k8s-pod-network.0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ab70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-0-0c5ff7085f", "pod":"csi-node-driver-b29ps", "timestamp":"2025-04-30 03:24:33.536982496 +0000 UTC"}, Hostname:"ci-4081.3.3-0-0c5ff7085f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.570 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.570 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.570 [INFO][4396] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-0c5ff7085f' Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.574 [INFO][4396] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.590 [INFO][4396] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.618 [INFO][4396] ipam/ipam.go 489: Trying affinity for 192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.623 [INFO][4396] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.627 [INFO][4396] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.627 [INFO][4396] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.632 [INFO][4396] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943 Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.640 [INFO][4396] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.652 [INFO][4396] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.132/26] block=192.168.43.128/26 handle="k8s-pod-network.0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.652 [INFO][4396] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.132/26] handle="k8s-pod-network.0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.652 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:24:33.697113 containerd[1586]: 2025-04-30 03:24:33.652 [INFO][4396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.132/26] IPv6=[] ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" HandleID="k8s-pod-network.0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.698533 containerd[1586]: 2025-04-30 03:24:33.656 [INFO][4375] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Namespace="calico-system" Pod="csi-node-driver-b29ps" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67c47235-153a-4d06-ba98-7cf5056b9032", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"", Pod:"csi-node-driver-b29ps", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9139a752b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:33.698533 containerd[1586]: 2025-04-30 03:24:33.656 [INFO][4375] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.132/32] ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Namespace="calico-system" Pod="csi-node-driver-b29ps" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.698533 containerd[1586]: 2025-04-30 03:24:33.656 [INFO][4375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9139a752b1a ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Namespace="calico-system" Pod="csi-node-driver-b29ps" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.698533 containerd[1586]: 2025-04-30 03:24:33.666 [INFO][4375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Namespace="calico-system" Pod="csi-node-driver-b29ps" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.698533 containerd[1586]: 2025-04-30 03:24:33.667 [INFO][4375] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Namespace="calico-system" Pod="csi-node-driver-b29ps" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67c47235-153a-4d06-ba98-7cf5056b9032", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943", Pod:"csi-node-driver-b29ps", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9139a752b1a", MAC:"aa:d3:3c:9d:92:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:33.698533 containerd[1586]: 2025-04-30 03:24:33.684 [INFO][4375] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943" Namespace="calico-system" Pod="csi-node-driver-b29ps" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:33.739082 containerd[1586]: time="2025-04-30T03:24:33.730061770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:33.739082 containerd[1586]: time="2025-04-30T03:24:33.730125958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:33.739082 containerd[1586]: time="2025-04-30T03:24:33.730156240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:33.739082 containerd[1586]: time="2025-04-30T03:24:33.730309122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:33.771910 containerd[1586]: time="2025-04-30T03:24:33.771772892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:33.772231 containerd[1586]: time="2025-04-30T03:24:33.772028342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:33.772231 containerd[1586]: time="2025-04-30T03:24:33.772091394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:33.774645 containerd[1586]: time="2025-04-30T03:24:33.774549058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:33.830697 containerd[1586]: time="2025-04-30T03:24:33.830557476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-trcpw,Uid:39887ce7-27ce-4a68-a0cb-cc6961010eef,Namespace:kube-system,Attempt:1,} returns sandbox id \"1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272\"" Apr 30 03:24:33.832063 kubelet[2721]: E0430 03:24:33.832025 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:33.840563 containerd[1586]: time="2025-04-30T03:24:33.840494790Z" level=info msg="CreateContainer within sandbox \"1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:24:33.840915 containerd[1586]: time="2025-04-30T03:24:33.840893570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b29ps,Uid:67c47235-153a-4d06-ba98-7cf5056b9032,Namespace:calico-system,Attempt:1,} returns sandbox id \"0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943\"" Apr 30 03:24:33.858971 containerd[1586]: time="2025-04-30T03:24:33.858291141Z" level=info msg="CreateContainer within sandbox \"1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9f3e3e938d62098fb3809203507888dfed6f366addca6cd7d1eca8a049dc387\"" Apr 30 03:24:33.859150 containerd[1586]: time="2025-04-30T03:24:33.858993125Z" level=info msg="StartContainer for \"a9f3e3e938d62098fb3809203507888dfed6f366addca6cd7d1eca8a049dc387\"" Apr 30 03:24:33.939958 containerd[1586]: time="2025-04-30T03:24:33.939907902Z" level=info msg="StartContainer for \"a9f3e3e938d62098fb3809203507888dfed6f366addca6cd7d1eca8a049dc387\" returns successfully" Apr 30 03:24:34.048701 containerd[1586]: time="2025-04-30T03:24:34.048413418Z" level=info msg="StopPodSandbox for \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\"" Apr 30 03:24:34.049417 containerd[1586]: time="2025-04-30T03:24:34.048427986Z" level=info msg="StopPodSandbox for \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\"" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.150 [INFO][4577] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.152 [INFO][4577] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" iface="eth0" netns="/var/run/netns/cni-06201d10-49a2-2a82-e6ad-94d51e2a5cf9" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.152 [INFO][4577] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" iface="eth0" netns="/var/run/netns/cni-06201d10-49a2-2a82-e6ad-94d51e2a5cf9" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.153 [INFO][4577] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" iface="eth0" netns="/var/run/netns/cni-06201d10-49a2-2a82-e6ad-94d51e2a5cf9" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.153 [INFO][4577] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.153 [INFO][4577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.292 [INFO][4590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" HandleID="k8s-pod-network.11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.292 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.292 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.301 [WARNING][4590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" HandleID="k8s-pod-network.11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.301 [INFO][4590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" HandleID="k8s-pod-network.11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.305 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:34.310094 containerd[1586]: 2025-04-30 03:24:34.307 [INFO][4577] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:34.310094 containerd[1586]: time="2025-04-30T03:24:34.309912261Z" level=info msg="TearDown network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\" successfully" Apr 30 03:24:34.310094 containerd[1586]: time="2025-04-30T03:24:34.309942411Z" level=info msg="StopPodSandbox for \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\" returns successfully" Apr 30 03:24:34.313988 containerd[1586]: time="2025-04-30T03:24:34.313553285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd484dd7-znmv5,Uid:2319ef4b-2c33-4712-bebd-81dcb419db1f,Namespace:calico-system,Attempt:1,}" Apr 30 03:24:34.322179 systemd[1]: run-netns-cni\x2d06201d10\x2d49a2\x2d2a82\x2de6ad\x2d94d51e2a5cf9.mount: Deactivated successfully. 
Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.161 [INFO][4578] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.165 [INFO][4578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" iface="eth0" netns="/var/run/netns/cni-b460bf8b-0bc6-cd58-4d92-314b5d26e2f3" Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.167 [INFO][4578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" iface="eth0" netns="/var/run/netns/cni-b460bf8b-0bc6-cd58-4d92-314b5d26e2f3" Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.168 [INFO][4578] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" iface="eth0" netns="/var/run/netns/cni-b460bf8b-0bc6-cd58-4d92-314b5d26e2f3" Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.168 [INFO][4578] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.168 [INFO][4578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.299 [INFO][4595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" HandleID="k8s-pod-network.e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.300 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.305 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.317 [WARNING][4595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" HandleID="k8s-pod-network.e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.317 [INFO][4595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" HandleID="k8s-pod-network.e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.323 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:34.330366 containerd[1586]: 2025-04-30 03:24:34.326 [INFO][4578] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:34.333414 containerd[1586]: time="2025-04-30T03:24:34.331246806Z" level=info msg="TearDown network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\" successfully" Apr 30 03:24:34.333414 containerd[1586]: time="2025-04-30T03:24:34.331295114Z" level=info msg="StopPodSandbox for \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\" returns successfully" Apr 30 03:24:34.333414 containerd[1586]: time="2025-04-30T03:24:34.332630587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6678cff58b-ff657,Uid:474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:24:34.344556 systemd[1]: run-netns-cni\x2db460bf8b\x2d0bc6\x2dcd58\x2d4d92\x2d314b5d26e2f3.mount: Deactivated successfully. Apr 30 03:24:34.460135 kubelet[2721]: E0430 03:24:34.459619 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:34.460135 kubelet[2721]: E0430 03:24:34.459853 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:34.631797 systemd-networkd[1222]: cali0c7d56e546b: Link UP Apr 30 03:24:34.633622 systemd-networkd[1222]: cali0c7d56e546b: Gained carrier Apr 30 03:24:34.649210 kubelet[2721]: I0430 03:24:34.647021 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-trcpw" podStartSLOduration=41.646995564 podStartE2EDuration="41.646995564s" podCreationTimestamp="2025-04-30 03:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:24:34.491928638 +0000 UTC m=+55.654048004" watchObservedRunningTime="2025-04-30 03:24:34.646995564 +0000 UTC m=+55.809114927" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.432 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0 calico-kube-controllers-65cd484dd7- calico-system 2319ef4b-2c33-4712-bebd-81dcb419db1f 961 0 2025-04-30 03:24:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65cd484dd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-0-0c5ff7085f calico-kube-controllers-65cd484dd7-znmv5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0c7d56e546b [] []}} ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Namespace="calico-system" Pod="calico-kube-controllers-65cd484dd7-znmv5" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.432 [INFO][4603] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Namespace="calico-system" Pod="calico-kube-controllers-65cd484dd7-znmv5" 
WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.539 [INFO][4627] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" HandleID="k8s-pod-network.4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.557 [INFO][4627] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" HandleID="k8s-pod-network.4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-0-0c5ff7085f", "pod":"calico-kube-controllers-65cd484dd7-znmv5", "timestamp":"2025-04-30 03:24:34.539826052 +0000 UTC"}, Hostname:"ci-4081.3.3-0-0c5ff7085f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.557 [INFO][4627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.557 [INFO][4627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.558 [INFO][4627] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-0c5ff7085f' Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.562 [INFO][4627] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.571 [INFO][4627] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.583 [INFO][4627] ipam/ipam.go 489: Trying affinity for 192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.586 [INFO][4627] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.590 [INFO][4627] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.590 [INFO][4627] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.594 [INFO][4627] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.603 [INFO][4627] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" 
host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.614 [INFO][4627] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.133/26] block=192.168.43.128/26 handle="k8s-pod-network.4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.614 [INFO][4627] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.133/26] handle="k8s-pod-network.4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.615 [INFO][4627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:34.659339 containerd[1586]: 2025-04-30 03:24:34.615 [INFO][4627] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.133/26] IPv6=[] ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" HandleID="k8s-pod-network.4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.663102 containerd[1586]: 2025-04-30 03:24:34.621 [INFO][4603] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Namespace="calico-system" Pod="calico-kube-controllers-65cd484dd7-znmv5" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0", GenerateName:"calico-kube-controllers-65cd484dd7-", Namespace:"calico-system", SelfLink:"", UID:"2319ef4b-2c33-4712-bebd-81dcb419db1f", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65cd484dd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"", Pod:"calico-kube-controllers-65cd484dd7-znmv5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c7d56e546b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:34.663102 containerd[1586]: 2025-04-30 03:24:34.622 [INFO][4603] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.133/32] ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Namespace="calico-system" Pod="calico-kube-controllers-65cd484dd7-znmv5" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.663102 containerd[1586]: 2025-04-30 03:24:34.622 [INFO][4603] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to cali0c7d56e546b ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Namespace="calico-system" Pod="calico-kube-controllers-65cd484dd7-znmv5" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.663102 containerd[1586]: 2025-04-30 03:24:34.631 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Namespace="calico-system" Pod="calico-kube-controllers-65cd484dd7-znmv5" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.663102 containerd[1586]: 2025-04-30 03:24:34.633 [INFO][4603] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Namespace="calico-system" Pod="calico-kube-controllers-65cd484dd7-znmv5" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0", GenerateName:"calico-kube-controllers-65cd484dd7-", Namespace:"calico-system", SelfLink:"", UID:"2319ef4b-2c33-4712-bebd-81dcb419db1f", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65cd484dd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe", Pod:"calico-kube-controllers-65cd484dd7-znmv5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c7d56e546b", MAC:"f2:76:ec:d1:44:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:34.663102 containerd[1586]: 2025-04-30 03:24:34.652 [INFO][4603] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe" Namespace="calico-system" Pod="calico-kube-controllers-65cd484dd7-znmv5" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:34.716832 systemd-networkd[1222]: calibd768560f74: Link UP Apr 30 03:24:34.724801 systemd-networkd[1222]: calibd768560f74: Gained carrier Apr 30 03:24:34.736444 containerd[1586]: time="2025-04-30T03:24:34.734937711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:34.736444 containerd[1586]: time="2025-04-30T03:24:34.735125661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:34.736444 containerd[1586]: time="2025-04-30T03:24:34.735201889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:34.736444 containerd[1586]: time="2025-04-30T03:24:34.735511516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.506 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0 calico-apiserver-6678cff58b- calico-apiserver 474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4 962 0 2025-04-30 03:24:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6678cff58b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-0-0c5ff7085f calico-apiserver-6678cff58b-ff657 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd768560f74 [] []}} ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ff657" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.506 [INFO][4613] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ff657" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.589 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" HandleID="k8s-pod-network.1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.611 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" HandleID="k8s-pod-network.1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00046cd80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-0-0c5ff7085f", "pod":"calico-apiserver-6678cff58b-ff657", "timestamp":"2025-04-30 03:24:34.589706508 +0000 UTC"}, Hostname:"ci-4081.3.3-0-0c5ff7085f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.612 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.615 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.615 [INFO][4635] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-0c5ff7085f' Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.618 [INFO][4635] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.627 [INFO][4635] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.642 [INFO][4635] ipam/ipam.go 489: Trying affinity for 192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.651 [INFO][4635] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.662 [INFO][4635] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.128/26 host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.663 [INFO][4635] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.128/26 handle="k8s-pod-network.1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.667 [INFO][4635] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97 Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.679 [INFO][4635] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.128/26 handle="k8s-pod-network.1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.694 [INFO][4635] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.134/26] block=192.168.43.128/26 handle="k8s-pod-network.1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.694 [INFO][4635] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.134/26] handle="k8s-pod-network.1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" host="ci-4081.3.3-0-0c5ff7085f" Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.694 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:24:34.773587 containerd[1586]: 2025-04-30 03:24:34.694 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.134/26] IPv6=[] ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" HandleID="k8s-pod-network.1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.775062 containerd[1586]: 2025-04-30 03:24:34.706 [INFO][4613] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ff657" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0", GenerateName:"calico-apiserver-6678cff58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6678cff58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"", Pod:"calico-apiserver-6678cff58b-ff657", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd768560f74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:34.775062 containerd[1586]: 2025-04-30 03:24:34.706 [INFO][4613] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.134/32] ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ff657" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.775062 containerd[1586]: 2025-04-30 03:24:34.706 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd768560f74 ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ff657" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.775062 containerd[1586]: 2025-04-30 03:24:34.723 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ff657" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.775062 containerd[1586]: 2025-04-30 03:24:34.734 [INFO][4613] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ff657" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0", GenerateName:"calico-apiserver-6678cff58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6678cff58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97", Pod:"calico-apiserver-6678cff58b-ff657", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd768560f74", MAC:"d6:59:da:63:59:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:34.775062 containerd[1586]: 2025-04-30 03:24:34.767 [INFO][4613] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97" Namespace="calico-apiserver" Pod="calico-apiserver-6678cff58b-ff657" WorkloadEndpoint="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:34.894295 containerd[1586]: time="2025-04-30T03:24:34.894072328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:34.895081 containerd[1586]: time="2025-04-30T03:24:34.894551877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:34.895081 containerd[1586]: time="2025-04-30T03:24:34.894857822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:34.895796 containerd[1586]: time="2025-04-30T03:24:34.895504269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:34.967402 containerd[1586]: time="2025-04-30T03:24:34.966761723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd484dd7-znmv5,Uid:2319ef4b-2c33-4712-bebd-81dcb419db1f,Namespace:calico-system,Attempt:1,} returns sandbox id \"4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe\"" Apr 30 03:24:35.015506 systemd-networkd[1222]: cali81296ef1864: Gained IPv6LL Apr 30 03:24:35.142206 containerd[1586]: time="2025-04-30T03:24:35.142148092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6678cff58b-ff657,Uid:474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97\"" Apr 30 03:24:35.148940 systemd-journald[1137]: Under memory pressure, flushing caches. Apr 30 03:24:35.147721 systemd-resolved[1478]: Under memory pressure, flushing caches. Apr 30 03:24:35.147797 systemd-resolved[1478]: Flushed all caches. Apr 30 03:24:35.209341 systemd-networkd[1222]: cali9139a752b1a: Gained IPv6LL Apr 30 03:24:35.482400 kubelet[2721]: E0430 03:24:35.477914 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:35.482400 kubelet[2721]: E0430 03:24:35.480621 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:36.296064 systemd-networkd[1222]: cali0c7d56e546b: Gained IPv6LL Apr 30 03:24:36.445552 containerd[1586]: time="2025-04-30T03:24:36.445185786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:36.446243 containerd[1586]: time="2025-04-30T03:24:36.446048805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:24:36.448732 containerd[1586]: time="2025-04-30T03:24:36.447028471Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:36.451157 containerd[1586]: time="2025-04-30T03:24:36.451040952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:36.452527 containerd[1586]: time="2025-04-30T03:24:36.452236173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.094237248s" Apr 30 03:24:36.452527 containerd[1586]: time="2025-04-30T03:24:36.452294383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:24:36.454933 containerd[1586]: time="2025-04-30T03:24:36.454775807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:24:36.457150 
containerd[1586]: time="2025-04-30T03:24:36.457100986Z" level=info msg="CreateContainer within sandbox \"507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:24:36.478370 containerd[1586]: time="2025-04-30T03:24:36.478307623Z" level=info msg="CreateContainer within sandbox \"507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b3840d34e3e531269495fd8c1e118fcaf12d3935b53e5d0e468e5b0c2054c42\"" Apr 30 03:24:36.482607 containerd[1586]: time="2025-04-30T03:24:36.482076976Z" level=info msg="StartContainer for \"7b3840d34e3e531269495fd8c1e118fcaf12d3935b53e5d0e468e5b0c2054c42\"" Apr 30 03:24:36.499089 kubelet[2721]: E0430 03:24:36.499049 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:36.559244 systemd[1]: run-containerd-runc-k8s.io-7b3840d34e3e531269495fd8c1e118fcaf12d3935b53e5d0e468e5b0c2054c42-runc.YKNQKQ.mount: Deactivated successfully. Apr 30 03:24:36.624600 containerd[1586]: time="2025-04-30T03:24:36.624519681Z" level=info msg="StartContainer for \"7b3840d34e3e531269495fd8c1e118fcaf12d3935b53e5d0e468e5b0c2054c42\" returns successfully" Apr 30 03:24:36.637407 systemd[1]: Started sshd@10-64.227.96.87:22-139.178.89.65:52568.service - OpenSSH per-connection server daemon (139.178.89.65:52568). Apr 30 03:24:36.680527 systemd-networkd[1222]: calibd768560f74: Gained IPv6LL Apr 30 03:24:36.732552 sshd[4799]: Accepted publickey for core from 139.178.89.65 port 52568 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:24:36.735108 sshd[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:24:36.742947 systemd-logind[1563]: New session 11 of user core. Apr 30 03:24:36.750963 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:24:37.148267 sshd[4799]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:37.161750 systemd[1]: Started sshd@11-64.227.96.87:22-139.178.89.65:52580.service - OpenSSH per-connection server daemon (139.178.89.65:52580). Apr 30 03:24:37.165106 systemd[1]: sshd@10-64.227.96.87:22-139.178.89.65:52568.service: Deactivated successfully. Apr 30 03:24:37.177233 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:24:37.183548 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:24:37.196113 systemd-journald[1137]: Under memory pressure, flushing caches. Apr 30 03:24:37.191973 systemd-resolved[1478]: Under memory pressure, flushing caches. Apr 30 03:24:37.192013 systemd-resolved[1478]: Flushed all caches. Apr 30 03:24:37.195907 systemd-logind[1563]: Removed session 11. Apr 30 03:24:37.228254 sshd[4819]: Accepted publickey for core from 139.178.89.65 port 52580 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:24:37.230073 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:24:37.238081 systemd-logind[1563]: New session 12 of user core. Apr 30 03:24:37.245873 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 30 03:24:37.485129 sshd[4819]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:37.503791 systemd[1]: Started sshd@12-64.227.96.87:22-139.178.89.65:52596.service - OpenSSH per-connection server daemon (139.178.89.65:52596). Apr 30 03:24:37.504941 systemd[1]: sshd@11-64.227.96.87:22-139.178.89.65:52580.service: Deactivated successfully. Apr 30 03:24:37.512295 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:24:37.520262 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:24:37.535243 systemd-logind[1563]: Removed session 12. Apr 30 03:24:37.541148 kubelet[2721]: E0430 03:24:37.540190 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:24:37.605443 sshd[4831]: Accepted publickey for core from 139.178.89.65 port 52596 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:24:37.608110 sshd[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:24:37.617436 systemd-logind[1563]: New session 13 of user core. Apr 30 03:24:37.621072 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:24:37.974831 sshd[4831]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:37.986712 systemd[1]: sshd@12-64.227.96.87:22-139.178.89.65:52596.service: Deactivated successfully. Apr 30 03:24:37.995646 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:24:37.995785 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:24:38.004618 systemd-logind[1563]: Removed session 13. Apr 30 03:24:38.551606 kubelet[2721]: I0430 03:24:38.551519 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:24:38.965566 containerd[1586]: time="2025-04-30T03:24:38.965497317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:38.966891 containerd[1586]: time="2025-04-30T03:24:38.966580091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:24:38.967649 containerd[1586]: time="2025-04-30T03:24:38.967559553Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:38.971648 containerd[1586]: time="2025-04-30T03:24:38.970915330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:38.971648 containerd[1586]: time="2025-04-30T03:24:38.971486776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.516656312s" Apr 30 03:24:38.971648 containerd[1586]: time="2025-04-30T03:24:38.971523252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:24:38.973669 containerd[1586]: 
time="2025-04-30T03:24:38.973611312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:24:38.977748 containerd[1586]: time="2025-04-30T03:24:38.977592669Z" level=info msg="CreateContainer within sandbox \"0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:24:39.000684 containerd[1586]: time="2025-04-30T03:24:39.000074391Z" level=info msg="CreateContainer within sandbox \"0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ea99b01a2f854677140f1b055e4d8593d0cced6e522465cf84cb62cceacd30d5\"" Apr 30 03:24:39.002388 containerd[1586]: time="2025-04-30T03:24:39.002350276Z" level=info msg="StartContainer for \"ea99b01a2f854677140f1b055e4d8593d0cced6e522465cf84cb62cceacd30d5\"" Apr 30 03:24:39.086514 systemd[1]: run-containerd-runc-k8s.io-ea99b01a2f854677140f1b055e4d8593d0cced6e522465cf84cb62cceacd30d5-runc.ZZymDQ.mount: Deactivated successfully. Apr 30 03:24:39.124069 containerd[1586]: time="2025-04-30T03:24:39.124009599Z" level=info msg="StopPodSandbox for \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\"" Apr 30 03:24:39.147892 containerd[1586]: time="2025-04-30T03:24:39.147457553Z" level=info msg="StartContainer for \"ea99b01a2f854677140f1b055e4d8593d0cced6e522465cf84cb62cceacd30d5\" returns successfully" Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.213 [WARNING][4907] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0", GenerateName:"calico-apiserver-6678cff58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c63b596-a9ae-4e22-9c6f-207ff0492217", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6678cff58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022", Pod:"calico-apiserver-6678cff58b-ph7rm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0139d77aa9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.213 [INFO][4907] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.213 [INFO][4907] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" iface="eth0" netns="" Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.213 [INFO][4907] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.213 [INFO][4907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.243 [INFO][4914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" HandleID="k8s-pod-network.5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.243 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.243 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.252 [WARNING][4914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" HandleID="k8s-pod-network.5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.252 [INFO][4914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" HandleID="k8s-pod-network.5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.254 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:39.259613 containerd[1586]: 2025-04-30 03:24:39.256 [INFO][4907] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:39.259613 containerd[1586]: time="2025-04-30T03:24:39.259358246Z" level=info msg="TearDown network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\" successfully" Apr 30 03:24:39.259613 containerd[1586]: time="2025-04-30T03:24:39.259383912Z" level=info msg="StopPodSandbox for \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\" returns successfully" Apr 30 03:24:39.266379 containerd[1586]: time="2025-04-30T03:24:39.266011572Z" level=info msg="RemovePodSandbox for \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\"" Apr 30 03:24:39.268314 containerd[1586]: time="2025-04-30T03:24:39.268244671Z" level=info msg="Forcibly stopping sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\"" Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.315 [WARNING][4933] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0", GenerateName:"calico-apiserver-6678cff58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c63b596-a9ae-4e22-9c6f-207ff0492217", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6678cff58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"507a2b9d7a15a168b1463e79da36a5139eadb608ae2ed04f37a2f586d28d0022", Pod:"calico-apiserver-6678cff58b-ph7rm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0139d77aa9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.316 [INFO][4933] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.316 [INFO][4933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" iface="eth0" netns="" Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.316 [INFO][4933] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.316 [INFO][4933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.347 [INFO][4940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" HandleID="k8s-pod-network.5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.347 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.347 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.355 [WARNING][4940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" HandleID="k8s-pod-network.5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.355 [INFO][4940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" HandleID="k8s-pod-network.5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ph7rm-eth0" Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.358 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:39.363182 containerd[1586]: 2025-04-30 03:24:39.360 [INFO][4933] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17" Apr 30 03:24:39.363953 containerd[1586]: time="2025-04-30T03:24:39.363229888Z" level=info msg="TearDown network for sandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\" successfully" Apr 30 03:24:39.374145 containerd[1586]: time="2025-04-30T03:24:39.373925920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:24:39.374145 containerd[1586]: time="2025-04-30T03:24:39.374035434Z" level=info msg="RemovePodSandbox \"5a9c7a940f8799f93497e36cd0119ac1780d2a811bad0bf9c1932892a4eeca17\" returns successfully" Apr 30 03:24:39.375538 containerd[1586]: time="2025-04-30T03:24:39.375348973Z" level=info msg="StopPodSandbox for \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\"" Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.435 [WARNING][4959] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca50be34-7ff2-4c44-99a2-9d71206348f1", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871", Pod:"coredns-7db6d8ff4d-9hc9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d51be7614a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.435 [INFO][4959] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.435 [INFO][4959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" iface="eth0" netns="" Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.435 [INFO][4959] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.435 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.479 [INFO][4969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" HandleID="k8s-pod-network.e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.479 [INFO][4969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.479 [INFO][4969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.487 [WARNING][4969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" HandleID="k8s-pod-network.e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.487 [INFO][4969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" HandleID="k8s-pod-network.e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.489 [INFO][4969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:39.494866 containerd[1586]: 2025-04-30 03:24:39.492 [INFO][4959] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:39.496126 containerd[1586]: time="2025-04-30T03:24:39.494941744Z" level=info msg="TearDown network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\" successfully" Apr 30 03:24:39.496126 containerd[1586]: time="2025-04-30T03:24:39.494990934Z" level=info msg="StopPodSandbox for \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\" returns successfully" Apr 30 03:24:39.496126 containerd[1586]: time="2025-04-30T03:24:39.495970988Z" level=info msg="RemovePodSandbox for \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\"" Apr 30 03:24:39.496126 containerd[1586]: time="2025-04-30T03:24:39.496024202Z" level=info msg="Forcibly stopping sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\"" Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.551 [WARNING][4987] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca50be34-7ff2-4c44-99a2-9d71206348f1", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"d9b960654f7e01ee3663461aacea86e4e4e751ce99df70e0c22c273e005d4871", Pod:"coredns-7db6d8ff4d-9hc9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d51be7614a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.551 [INFO][4987] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.551 [INFO][4987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" iface="eth0" netns="" Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.551 [INFO][4987] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.551 [INFO][4987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.587 [INFO][4994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" HandleID="k8s-pod-network.e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.587 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.587 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.595 [WARNING][4994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" HandleID="k8s-pod-network.e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.595 [INFO][4994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" HandleID="k8s-pod-network.e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--9hc9l-eth0" Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.598 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:39.602573 containerd[1586]: 2025-04-30 03:24:39.600 [INFO][4987] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a" Apr 30 03:24:39.604454 containerd[1586]: time="2025-04-30T03:24:39.602609654Z" level=info msg="TearDown network for sandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\" successfully" Apr 30 03:24:39.605698 containerd[1586]: time="2025-04-30T03:24:39.605654109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:24:39.605864 containerd[1586]: time="2025-04-30T03:24:39.605731317Z" level=info msg="RemovePodSandbox \"e2ba273d00e5b05e20165b7fa956a4bc6e25752a94d2b5bf40ad4908aadef47a\" returns successfully" Apr 30 03:24:39.606280 containerd[1586]: time="2025-04-30T03:24:39.606251449Z" level=info msg="StopPodSandbox for \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\"" Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.654 [WARNING][5012] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0", GenerateName:"calico-apiserver-6678cff58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6678cff58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97", Pod:"calico-apiserver-6678cff58b-ff657", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd768560f74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.654 [INFO][5012] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.654 [INFO][5012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" iface="eth0" netns="" Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.654 [INFO][5012] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.654 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.682 [INFO][5019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" HandleID="k8s-pod-network.e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.682 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.682 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.690 [WARNING][5019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" HandleID="k8s-pod-network.e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.690 [INFO][5019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" HandleID="k8s-pod-network.e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.693 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:39.698826 containerd[1586]: 2025-04-30 03:24:39.695 [INFO][5012] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:39.700234 containerd[1586]: time="2025-04-30T03:24:39.699435026Z" level=info msg="TearDown network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\" successfully" Apr 30 03:24:39.700234 containerd[1586]: time="2025-04-30T03:24:39.699488330Z" level=info msg="StopPodSandbox for \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\" returns successfully" Apr 30 03:24:39.700234 containerd[1586]: time="2025-04-30T03:24:39.700121674Z" level=info msg="RemovePodSandbox for \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\"" Apr 30 03:24:39.700234 containerd[1586]: time="2025-04-30T03:24:39.700154537Z" level=info msg="Forcibly stopping sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\"" Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.759 [WARNING][5037] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0", GenerateName:"calico-apiserver-6678cff58b-", Namespace:"calico-apiserver", SelfLink:"", UID:"474f9b63-fbd6-4b8c-889a-f6e7b01ee1f4", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6678cff58b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97", Pod:"calico-apiserver-6678cff58b-ff657", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd768560f74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.759 [INFO][5037] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.759 [INFO][5037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" iface="eth0" netns="" Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.759 [INFO][5037] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.759 [INFO][5037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.790 [INFO][5044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" HandleID="k8s-pod-network.e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.791 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.791 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.801 [WARNING][5044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" HandleID="k8s-pod-network.e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.801 [INFO][5044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" HandleID="k8s-pod-network.e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--apiserver--6678cff58b--ff657-eth0" Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.804 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:39.814004 containerd[1586]: 2025-04-30 03:24:39.810 [INFO][5037] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f" Apr 30 03:24:39.814719 containerd[1586]: time="2025-04-30T03:24:39.814050566Z" level=info msg="TearDown network for sandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\" successfully" Apr 30 03:24:39.820192 containerd[1586]: time="2025-04-30T03:24:39.820142977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:24:39.820618 containerd[1586]: time="2025-04-30T03:24:39.820216093Z" level=info msg="RemovePodSandbox \"e2ca9ca3d4571b131cb199f5bf6bddaa11130e2fd786a8410f1e5b6fad2baf9f\" returns successfully" Apr 30 03:24:39.821347 containerd[1586]: time="2025-04-30T03:24:39.821145595Z" level=info msg="StopPodSandbox for \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\"" Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.873 [WARNING][5062] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"39887ce7-27ce-4a68-a0cb-cc6961010eef", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272", Pod:"coredns-7db6d8ff4d-trcpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81296ef1864", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.873 [INFO][5062] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.873 [INFO][5062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" iface="eth0" netns="" Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.873 [INFO][5062] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.873 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.902 [INFO][5070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" HandleID="k8s-pod-network.49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.903 [INFO][5070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.903 [INFO][5070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.913 [WARNING][5070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" HandleID="k8s-pod-network.49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.914 [INFO][5070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" HandleID="k8s-pod-network.49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.917 [INFO][5070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:39.922383 containerd[1586]: 2025-04-30 03:24:39.919 [INFO][5062] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:39.922383 containerd[1586]: time="2025-04-30T03:24:39.922204347Z" level=info msg="TearDown network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\" successfully" Apr 30 03:24:39.922383 containerd[1586]: time="2025-04-30T03:24:39.922234500Z" level=info msg="StopPodSandbox for \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\" returns successfully" Apr 30 03:24:39.925840 containerd[1586]: time="2025-04-30T03:24:39.923901739Z" level=info msg="RemovePodSandbox for \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\"" Apr 30 03:24:39.925840 containerd[1586]: time="2025-04-30T03:24:39.923935495Z" level=info msg="Forcibly stopping sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\"" Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:39.988 [WARNING][5088] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"39887ce7-27ce-4a68-a0cb-cc6961010eef", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"1399f4c52f136141e632dd53b09bf3cca9cdf0e7f0ac5351b58d4ca1041f4272", Pod:"coredns-7db6d8ff4d-trcpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81296ef1864", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:39.988 [INFO][5088] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:39.988 [INFO][5088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" iface="eth0" netns="" Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:39.988 [INFO][5088] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:39.988 [INFO][5088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:40.024 [INFO][5097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" HandleID="k8s-pod-network.49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:40.024 [INFO][5097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:40.025 [INFO][5097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:40.034 [WARNING][5097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" HandleID="k8s-pod-network.49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:40.034 [INFO][5097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" HandleID="k8s-pod-network.49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-coredns--7db6d8ff4d--trcpw-eth0" Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:40.036 [INFO][5097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:40.042272 containerd[1586]: 2025-04-30 03:24:40.039 [INFO][5088] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf" Apr 30 03:24:40.044097 containerd[1586]: time="2025-04-30T03:24:40.042307335Z" level=info msg="TearDown network for sandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\" successfully" Apr 30 03:24:40.045534 containerd[1586]: time="2025-04-30T03:24:40.045482063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:24:40.045656 containerd[1586]: time="2025-04-30T03:24:40.045571375Z" level=info msg="RemovePodSandbox \"49717b4072aabcb2f3f8f4dc9fd23a2d959f45bf7f4e64b44346c64d42fd0daf\" returns successfully" Apr 30 03:24:40.046434 containerd[1586]: time="2025-04-30T03:24:40.046378757Z" level=info msg="StopPodSandbox for \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\"" Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.109 [WARNING][5115] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67c47235-153a-4d06-ba98-7cf5056b9032", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943", Pod:"csi-node-driver-b29ps", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9139a752b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.110 [INFO][5115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.110 [INFO][5115] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" iface="eth0" netns="" Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.110 [INFO][5115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.110 [INFO][5115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.139 [INFO][5122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" HandleID="k8s-pod-network.237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.140 [INFO][5122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.140 [INFO][5122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.147 [WARNING][5122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" HandleID="k8s-pod-network.237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.147 [INFO][5122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" HandleID="k8s-pod-network.237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.150 [INFO][5122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:40.159142 containerd[1586]: 2025-04-30 03:24:40.156 [INFO][5115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:40.160948 containerd[1586]: time="2025-04-30T03:24:40.159210033Z" level=info msg="TearDown network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\" successfully" Apr 30 03:24:40.160948 containerd[1586]: time="2025-04-30T03:24:40.159256551Z" level=info msg="StopPodSandbox for \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\" returns successfully" Apr 30 03:24:40.160948 containerd[1586]: time="2025-04-30T03:24:40.160205779Z" level=info msg="RemovePodSandbox for \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\"" Apr 30 03:24:40.160948 containerd[1586]: time="2025-04-30T03:24:40.160242186Z" level=info msg="Forcibly stopping sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\"" Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.213 [WARNING][5140] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67c47235-153a-4d06-ba98-7cf5056b9032", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943", Pod:"csi-node-driver-b29ps", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9139a752b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.214 [INFO][5140] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.214 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" iface="eth0" netns="" Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.214 [INFO][5140] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.214 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.243 [INFO][5147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" HandleID="k8s-pod-network.237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.244 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.244 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.251 [WARNING][5147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" HandleID="k8s-pod-network.237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.251 [INFO][5147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" HandleID="k8s-pod-network.237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-csi--node--driver--b29ps-eth0" Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.253 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:40.260079 containerd[1586]: 2025-04-30 03:24:40.256 [INFO][5140] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4" Apr 30 03:24:40.260079 containerd[1586]: time="2025-04-30T03:24:40.258430760Z" level=info msg="TearDown network for sandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\" successfully" Apr 30 03:24:40.263710 containerd[1586]: time="2025-04-30T03:24:40.263654676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:24:40.263934 containerd[1586]: time="2025-04-30T03:24:40.263916439Z" level=info msg="RemovePodSandbox \"237a41b3b450f32abb8dc43d67a471acc048a2c98ca36df5c32b2e5d280633e4\" returns successfully" Apr 30 03:24:40.264662 containerd[1586]: time="2025-04-30T03:24:40.264633949Z" level=info msg="StopPodSandbox for \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\"" Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.319 [WARNING][5165] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0", GenerateName:"calico-kube-controllers-65cd484dd7-", Namespace:"calico-system", SelfLink:"", UID:"2319ef4b-2c33-4712-bebd-81dcb419db1f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65cd484dd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe", Pod:"calico-kube-controllers-65cd484dd7-znmv5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c7d56e546b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.319 [INFO][5165] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.319 [INFO][5165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" iface="eth0" netns="" Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.319 [INFO][5165] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.319 [INFO][5165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.353 [INFO][5172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" HandleID="k8s-pod-network.11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.354 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.354 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.361 [WARNING][5172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" HandleID="k8s-pod-network.11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.361 [INFO][5172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" HandleID="k8s-pod-network.11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.363 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:40.367656 containerd[1586]: 2025-04-30 03:24:40.365 [INFO][5165] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:40.368495 containerd[1586]: time="2025-04-30T03:24:40.368208876Z" level=info msg="TearDown network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\" successfully" Apr 30 03:24:40.368495 containerd[1586]: time="2025-04-30T03:24:40.368242989Z" level=info msg="StopPodSandbox for \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\" returns successfully" Apr 30 03:24:40.369364 containerd[1586]: time="2025-04-30T03:24:40.368942903Z" level=info msg="RemovePodSandbox for \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\"" Apr 30 03:24:40.369364 containerd[1586]: time="2025-04-30T03:24:40.368983636Z" level=info msg="Forcibly stopping sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\"" Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.421 [WARNING][5190] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0", GenerateName:"calico-kube-controllers-65cd484dd7-", Namespace:"calico-system", SelfLink:"", UID:"2319ef4b-2c33-4712-bebd-81dcb419db1f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65cd484dd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-0c5ff7085f", ContainerID:"4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe", Pod:"calico-kube-controllers-65cd484dd7-znmv5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c7d56e546b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.421 [INFO][5190] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.421 [INFO][5190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" iface="eth0" netns="" Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.421 [INFO][5190] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.421 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.452 [INFO][5197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" HandleID="k8s-pod-network.11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.452 [INFO][5197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.452 [INFO][5197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.459 [WARNING][5197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" HandleID="k8s-pod-network.11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.459 [INFO][5197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" HandleID="k8s-pod-network.11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Workload="ci--4081.3.3--0--0c5ff7085f-k8s-calico--kube--controllers--65cd484dd7--znmv5-eth0" Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.462 [INFO][5197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:24:40.466342 containerd[1586]: 2025-04-30 03:24:40.464 [INFO][5190] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de" Apr 30 03:24:40.467364 containerd[1586]: time="2025-04-30T03:24:40.466934353Z" level=info msg="TearDown network for sandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\" successfully" Apr 30 03:24:40.470384 containerd[1586]: time="2025-04-30T03:24:40.470140408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:24:40.470384 containerd[1586]: time="2025-04-30T03:24:40.470232039Z" level=info msg="RemovePodSandbox \"11c5b0c559593f8056d6e60660088b8d1db8ad6e328349b6955b285a3bb503de\" returns successfully" Apr 30 03:24:42.239989 containerd[1586]: time="2025-04-30T03:24:42.239930067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:42.241482 containerd[1586]: time="2025-04-30T03:24:42.241232127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:24:42.243015 containerd[1586]: time="2025-04-30T03:24:42.242755605Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:42.250132 containerd[1586]: time="2025-04-30T03:24:42.250040061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:42.251361 containerd[1586]: time="2025-04-30T03:24:42.251099839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.277429234s" Apr 30 03:24:42.251361 containerd[1586]: time="2025-04-30T03:24:42.251160561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:24:42.254971 containerd[1586]: 
time="2025-04-30T03:24:42.254913200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:24:42.280991 containerd[1586]: time="2025-04-30T03:24:42.280796707Z" level=info msg="CreateContainer within sandbox \"4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:24:42.301698 containerd[1586]: time="2025-04-30T03:24:42.301493292Z" level=info msg="CreateContainer within sandbox \"4bfcde3b209affb1bef3bf98ebfe7402569e4e3b635c2c88b8234fd6645c9abe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1c3d2196a69bfd5207cdcc94ad2c153688ace4d24ca48f0d39e4aeda3b08aa52\"" Apr 30 03:24:42.302821 containerd[1586]: time="2025-04-30T03:24:42.302515065Z" level=info msg="StartContainer for \"1c3d2196a69bfd5207cdcc94ad2c153688ace4d24ca48f0d39e4aeda3b08aa52\"" Apr 30 03:24:42.414122 containerd[1586]: time="2025-04-30T03:24:42.414065580Z" level=info msg="StartContainer for \"1c3d2196a69bfd5207cdcc94ad2c153688ace4d24ca48f0d39e4aeda3b08aa52\" returns successfully" Apr 30 03:24:42.624475 kubelet[2721]: I0430 03:24:42.624306 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6678cff58b-ph7rm" podStartSLOduration=34.498333692 podStartE2EDuration="38.624266049s" podCreationTimestamp="2025-04-30 03:24:04 +0000 UTC" firstStartedPulling="2025-04-30 03:24:32.327679592 +0000 UTC m=+53.489798953" lastFinishedPulling="2025-04-30 03:24:36.453611953 +0000 UTC m=+57.615731310" observedRunningTime="2025-04-30 03:24:37.586242545 +0000 UTC m=+58.748361908" watchObservedRunningTime="2025-04-30 03:24:42.624266049 +0000 UTC m=+63.786385561" Apr 30 03:24:42.626627 kubelet[2721]: I0430 03:24:42.626085 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65cd484dd7-znmv5" podStartSLOduration=31.344702109 podStartE2EDuration="38.626065583s" podCreationTimestamp="2025-04-30 03:24:04 +0000 UTC" firstStartedPulling="2025-04-30 03:24:34.971215702 +0000 UTC m=+56.133335045" lastFinishedPulling="2025-04-30 03:24:42.252579164 +0000 UTC m=+63.414698519" observedRunningTime="2025-04-30 03:24:42.622534443 +0000 UTC m=+63.784653822" watchObservedRunningTime="2025-04-30 03:24:42.626065583 +0000 UTC m=+63.788184946" Apr 30 03:24:42.991145 systemd[1]: Started sshd@13-64.227.96.87:22-139.178.89.65:52598.service - OpenSSH per-connection server daemon (139.178.89.65:52598). 
Apr 30 03:24:43.016862 containerd[1586]: time="2025-04-30T03:24:43.016782993Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:43.025298 containerd[1586]: time="2025-04-30T03:24:43.025222051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
Apr 30 03:24:43.035371 containerd[1586]: time="2025-04-30T03:24:43.035110131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 780.127495ms"
Apr 30 03:24:43.035371 containerd[1586]: time="2025-04-30T03:24:43.035188478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
Apr 30 03:24:43.042928 containerd[1586]: time="2025-04-30T03:24:43.042553242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
Apr 30 03:24:43.059297 containerd[1586]: time="2025-04-30T03:24:43.059090562Z" level=info msg="CreateContainer within sandbox \"1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 30 03:24:43.082003 containerd[1586]: time="2025-04-30T03:24:43.081886818Z" level=info msg="CreateContainer within sandbox \"1913523d6e48d0ddfaf07cd1f4ee00bb5b4d317a759021d68860582771ef2e97\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"afe1286c0d480c6e11f4bbb7fc233aea5e8fa76b1875bf1e06b83701c0fecfd7\""
Apr 30 03:24:43.082869 containerd[1586]: time="2025-04-30T03:24:43.082818632Z" level=info msg="StartContainer for \"afe1286c0d480c6e11f4bbb7fc233aea5e8fa76b1875bf1e06b83701c0fecfd7\""
Apr 30 03:24:43.146918 sshd[5266]: Accepted publickey for core from 139.178.89.65 port 52598 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:43.151076 sshd[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:43.159920 systemd-logind[1563]: New session 14 of user core.
Apr 30 03:24:43.171171 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 03:24:43.211481 containerd[1586]: time="2025-04-30T03:24:43.211412352Z" level=info msg="StartContainer for \"afe1286c0d480c6e11f4bbb7fc233aea5e8fa76b1875bf1e06b83701c0fecfd7\" returns successfully"
Apr 30 03:24:43.860628 sshd[5266]: pam_unix(sshd:session): session closed for user core
Apr 30 03:24:43.865571 systemd[1]: sshd@13-64.227.96.87:22-139.178.89.65:52598.service: Deactivated successfully.
Apr 30 03:24:43.873656 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit.
Apr 30 03:24:43.875982 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 03:24:43.878456 systemd-logind[1563]: Removed session 14.
Apr 30 03:24:44.612827 kubelet[2721]: I0430 03:24:44.611694 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 03:24:44.757020 kubelet[2721]: I0430 03:24:44.755147 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6678cff58b-ff657" podStartSLOduration=32.866010242 podStartE2EDuration="40.75511716s" podCreationTimestamp="2025-04-30 03:24:04 +0000 UTC" firstStartedPulling="2025-04-30 03:24:35.152956657 +0000 UTC m=+56.315076016" lastFinishedPulling="2025-04-30 03:24:43.042063571 +0000 UTC m=+64.204182934" observedRunningTime="2025-04-30 03:24:43.689685131 +0000 UTC m=+64.851804494" watchObservedRunningTime="2025-04-30 03:24:44.75511716 +0000 UTC m=+65.917236527"
Apr 30 03:24:45.127695 systemd-resolved[1478]: Under memory pressure, flushing caches.
Apr 30 03:24:45.130139 systemd-journald[1137]: Under memory pressure, flushing caches.
Apr 30 03:24:45.127753 systemd-resolved[1478]: Flushed all caches.
Apr 30 03:24:45.756504 containerd[1586]: time="2025-04-30T03:24:45.756432016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:45.757736 containerd[1586]: time="2025-04-30T03:24:45.757663493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
Apr 30 03:24:45.759417 containerd[1586]: time="2025-04-30T03:24:45.758391053Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:45.762452 containerd[1586]: time="2025-04-30T03:24:45.762355562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:45.764281 containerd[1586]: time="2025-04-30T03:24:45.763368692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.720708954s"
Apr 30 03:24:45.764281 containerd[1586]: time="2025-04-30T03:24:45.763435357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
Apr 30 03:24:45.767572 containerd[1586]: time="2025-04-30T03:24:45.767516747Z" level=info msg="CreateContainer within sandbox \"0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 30 03:24:45.790970 containerd[1586]: time="2025-04-30T03:24:45.790846610Z" level=info msg="CreateContainer within sandbox \"0ad78c704ce8bf4b8e286b667fa9d8aac6797065bc93a67aa9bd4d77c78ee943\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6ac6bdbe089e5191f94ed1024ca0b92922c3ca25f74bf5094b447579b2866bad\""
Apr 30 03:24:45.794109 containerd[1586]: time="2025-04-30T03:24:45.792441630Z" level=info msg="StartContainer for \"6ac6bdbe089e5191f94ed1024ca0b92922c3ca25f74bf5094b447579b2866bad\""
Apr 30 03:24:45.905267 containerd[1586]: time="2025-04-30T03:24:45.905224309Z" level=info msg="StartContainer for \"6ac6bdbe089e5191f94ed1024ca0b92922c3ca25f74bf5094b447579b2866bad\" returns successfully"
Apr 30 03:24:46.407687 kubelet[2721]: I0430 03:24:46.407602 2721 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 30 03:24:46.407687 kubelet[2721]: I0430 03:24:46.407697 2721 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 30 03:24:46.639088 kubelet[2721]: I0430 03:24:46.638975 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b29ps" podStartSLOduration=30.726982634 podStartE2EDuration="42.638945872s" podCreationTimestamp="2025-04-30 03:24:04 +0000 UTC" firstStartedPulling="2025-04-30 03:24:33.853129179 +0000 UTC m=+55.015248522" lastFinishedPulling="2025-04-30 03:24:45.765092378 +0000 UTC m=+66.927211760" observedRunningTime="2025-04-30 03:24:46.638910075 +0000 UTC m=+67.801029439" watchObservedRunningTime="2025-04-30 03:24:46.638945872 +0000 UTC m=+67.801065237"
Apr 30 03:24:47.176629 systemd-resolved[1478]: Under memory pressure, flushing caches.
Apr 30 03:24:47.176680 systemd-resolved[1478]: Flushed all caches.
Apr 30 03:24:47.179379 systemd-journald[1137]: Under memory pressure, flushing caches.
Apr 30 03:24:48.871705 systemd[1]: Started sshd@14-64.227.96.87:22-139.178.89.65:40964.service - OpenSSH per-connection server daemon (139.178.89.65:40964).
Apr 30 03:24:48.974917 sshd[5385]: Accepted publickey for core from 139.178.89.65 port 40964 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:48.977192 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:48.984193 systemd-logind[1563]: New session 15 of user core.
Apr 30 03:24:48.989290 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 03:24:49.317646 sshd[5385]: pam_unix(sshd:session): session closed for user core
Apr 30 03:24:49.321877 systemd[1]: sshd@14-64.227.96.87:22-139.178.89.65:40964.service: Deactivated successfully.
Apr 30 03:24:49.330073 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit.
Apr 30 03:24:49.330812 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 03:24:49.333176 systemd-logind[1563]: Removed session 15.
Apr 30 03:24:54.327934 systemd[1]: Started sshd@15-64.227.96.87:22-139.178.89.65:40974.service - OpenSSH per-connection server daemon (139.178.89.65:40974).
Apr 30 03:24:54.389116 sshd[5401]: Accepted publickey for core from 139.178.89.65 port 40974 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:54.391559 sshd[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:54.397594 systemd-logind[1563]: New session 16 of user core.
Apr 30 03:24:54.402918 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 03:24:54.555059 sshd[5401]: pam_unix(sshd:session): session closed for user core
Apr 30 03:24:54.559785 systemd[1]: sshd@15-64.227.96.87:22-139.178.89.65:40974.service: Deactivated successfully.
Apr 30 03:24:54.566524 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit.
Apr 30 03:24:54.567384 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 03:24:54.569818 systemd-logind[1563]: Removed session 16.
Apr 30 03:24:59.564950 systemd[1]: Started sshd@16-64.227.96.87:22-139.178.89.65:37644.service - OpenSSH per-connection server daemon (139.178.89.65:37644).
Apr 30 03:24:59.614804 sshd[5417]: Accepted publickey for core from 139.178.89.65 port 37644 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:59.617508 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:59.623592 systemd-logind[1563]: New session 17 of user core.
Apr 30 03:24:59.633795 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 03:24:59.788085 sshd[5417]: pam_unix(sshd:session): session closed for user core
Apr 30 03:24:59.799460 systemd[1]: Started sshd@17-64.227.96.87:22-139.178.89.65:37654.service - OpenSSH per-connection server daemon (139.178.89.65:37654).
Apr 30 03:24:59.800205 systemd[1]: sshd@16-64.227.96.87:22-139.178.89.65:37644.service: Deactivated successfully.
Apr 30 03:24:59.807176 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 03:24:59.811861 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit.
Apr 30 03:24:59.813891 systemd-logind[1563]: Removed session 17.
Apr 30 03:24:59.851178 sshd[5429]: Accepted publickey for core from 139.178.89.65 port 37654 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:59.853511 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:59.860500 systemd-logind[1563]: New session 18 of user core.
Apr 30 03:24:59.864701 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 03:25:00.275634 sshd[5429]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:00.285035 systemd[1]: Started sshd@18-64.227.96.87:22-139.178.89.65:37666.service - OpenSSH per-connection server daemon (139.178.89.65:37666).
Apr 30 03:25:00.285775 systemd[1]: sshd@17-64.227.96.87:22-139.178.89.65:37654.service: Deactivated successfully.
Apr 30 03:25:00.294169 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 03:25:00.295224 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit.
Apr 30 03:25:00.298747 systemd-logind[1563]: Removed session 18.
Apr 30 03:25:00.347220 sshd[5444]: Accepted publickey for core from 139.178.89.65 port 37666 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:00.349898 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:00.357448 systemd-logind[1563]: New session 19 of user core.
Apr 30 03:25:00.362888 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 03:25:03.181536 systemd-journald[1137]: Under memory pressure, flushing caches.
Apr 30 03:25:03.178854 systemd-resolved[1478]: Under memory pressure, flushing caches.
Apr 30 03:25:03.178869 systemd-resolved[1478]: Flushed all caches.
Apr 30 03:25:03.848165 sshd[5444]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:03.864786 systemd[1]: Started sshd@19-64.227.96.87:22-139.178.89.65:37680.service - OpenSSH per-connection server daemon (139.178.89.65:37680).
Apr 30 03:25:03.873874 systemd[1]: sshd@18-64.227.96.87:22-139.178.89.65:37666.service: Deactivated successfully.
Apr 30 03:25:03.906693 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 03:25:03.912005 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit.
Apr 30 03:25:03.919341 systemd-logind[1563]: Removed session 19.
Apr 30 03:25:04.042559 sshd[5485]: Accepted publickey for core from 139.178.89.65 port 37680 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:04.044935 sshd[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:04.083498 systemd-logind[1563]: New session 20 of user core.
Apr 30 03:25:04.086716 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 03:25:05.184182 sshd[5485]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:05.206500 systemd[1]: Started sshd@20-64.227.96.87:22-139.178.89.65:37690.service - OpenSSH per-connection server daemon (139.178.89.65:37690).
Apr 30 03:25:05.207318 systemd[1]: sshd@19-64.227.96.87:22-139.178.89.65:37680.service: Deactivated successfully.
Apr 30 03:25:05.224671 systemd-resolved[1478]: Under memory pressure, flushing caches.
Apr 30 03:25:05.236271 systemd-journald[1137]: Under memory pressure, flushing caches.
Apr 30 03:25:05.224681 systemd-resolved[1478]: Flushed all caches.
Apr 30 03:25:05.225997 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 03:25:05.229879 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit.
Apr 30 03:25:05.239234 systemd-logind[1563]: Removed session 20.
Apr 30 03:25:05.351458 sshd[5500]: Accepted publickey for core from 139.178.89.65 port 37690 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:05.358418 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:05.377680 systemd-logind[1563]: New session 21 of user core.
Apr 30 03:25:05.382847 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 03:25:05.686697 sshd[5500]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:05.693251 systemd[1]: sshd@20-64.227.96.87:22-139.178.89.65:37690.service: Deactivated successfully.
Apr 30 03:25:05.703849 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 03:25:05.705912 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit.
Apr 30 03:25:05.710799 systemd-logind[1563]: Removed session 21.
Apr 30 03:25:09.113049 kubelet[2721]: E0430 03:25:09.112714 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:25:10.700552 systemd[1]: Started sshd@21-64.227.96.87:22-139.178.89.65:53652.service - OpenSSH per-connection server daemon (139.178.89.65:53652).
Apr 30 03:25:10.795580 sshd[5526]: Accepted publickey for core from 139.178.89.65 port 53652 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:10.800802 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:10.815443 systemd-logind[1563]: New session 22 of user core.
Apr 30 03:25:10.824738 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 03:25:11.227290 sshd[5526]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:11.235586 systemd[1]: sshd@21-64.227.96.87:22-139.178.89.65:53652.service: Deactivated successfully.
Apr 30 03:25:11.244802 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit.
Apr 30 03:25:11.244878 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 03:25:11.252943 systemd-logind[1563]: Removed session 22.
Apr 30 03:25:11.672593 kubelet[2721]: I0430 03:25:11.671946 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 03:25:13.067263 kubelet[2721]: E0430 03:25:13.065854 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:25:16.244116 systemd[1]: Started sshd@22-64.227.96.87:22-139.178.89.65:53664.service - OpenSSH per-connection server daemon (139.178.89.65:53664).
Apr 30 03:25:16.360899 sshd[5544]: Accepted publickey for core from 139.178.89.65 port 53664 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:16.363443 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:16.378874 systemd-logind[1563]: New session 23 of user core.
Apr 30 03:25:16.386949 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:25:16.576038 sshd[5544]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:16.586840 systemd[1]: sshd@22-64.227.96.87:22-139.178.89.65:53664.service: Deactivated successfully.
Apr 30 03:25:16.595176 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:25:16.596402 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:25:16.602738 systemd-logind[1563]: Removed session 23.
Apr 30 03:25:20.045649 kubelet[2721]: E0430 03:25:20.044782 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:25:21.585769 systemd[1]: Started sshd@23-64.227.96.87:22-139.178.89.65:37898.service - OpenSSH per-connection server daemon (139.178.89.65:37898).
Apr 30 03:25:21.646783 sshd[5578]: Accepted publickey for core from 139.178.89.65 port 37898 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:21.649375 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:21.654742 systemd-logind[1563]: New session 24 of user core.
Apr 30 03:25:21.659732 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 03:25:22.021500 sshd[5578]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:22.025655 systemd[1]: sshd@23-64.227.96.87:22-139.178.89.65:37898.service: Deactivated successfully.
Apr 30 03:25:22.032480 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit.
Apr 30 03:25:22.033645 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 03:25:22.039296 systemd-logind[1563]: Removed session 24.
Apr 30 03:25:23.045538 kubelet[2721]: E0430 03:25:23.045482 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:25:26.045459 kubelet[2721]: E0430 03:25:26.045419 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:25:27.031816 systemd[1]: Started sshd@24-64.227.96.87:22-139.178.89.65:48954.service - OpenSSH per-connection server daemon (139.178.89.65:48954).
Apr 30 03:25:27.088820 sshd[5594]: Accepted publickey for core from 139.178.89.65 port 48954 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:27.091247 sshd[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:27.105021 systemd-logind[1563]: New session 25 of user core.
Apr 30 03:25:27.108074 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 03:25:27.323969 sshd[5594]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:27.329536 systemd[1]: sshd@24-64.227.96.87:22-139.178.89.65:48954.service: Deactivated successfully.
Apr 30 03:25:27.336090 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 03:25:27.338137 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit.
Apr 30 03:25:27.339607 systemd-logind[1563]: Removed session 25.