Apr 30 03:27:43.029932 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:27:43.029980 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:27:43.030001 kernel: BIOS-provided physical RAM map:
Apr 30 03:27:43.030013 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 03:27:43.030023 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 03:27:43.030034 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 03:27:43.030047 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Apr 30 03:27:43.030059 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Apr 30 03:27:43.030070 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 03:27:43.030087 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 03:27:43.030099 kernel: NX (Execute Disable) protection: active
Apr 30 03:27:43.030110 kernel: APIC: Static calls initialized
Apr 30 03:27:43.030131 kernel: SMBIOS 2.8 present.
Apr 30 03:27:43.030144 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Apr 30 03:27:43.030158 kernel: Hypervisor detected: KVM
Apr 30 03:27:43.030177 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:27:43.030195 kernel: kvm-clock: using sched offset of 2877180872 cycles
Apr 30 03:27:43.030209 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:27:43.030223 kernel: tsc: Detected 2494.134 MHz processor
Apr 30 03:27:43.030236 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:27:43.030249 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:27:43.030261 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Apr 30 03:27:43.030275 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 03:27:43.030288 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:27:43.030309 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:27:43.030322 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Apr 30 03:27:43.030335 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:27:43.030349 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:27:43.030361 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:27:43.030373 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 30 03:27:43.030386 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:27:43.030398 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:27:43.030410 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:27:43.030430 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:27:43.030443 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Apr 30 03:27:43.030455 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Apr 30 03:27:43.030468 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 30 03:27:43.030482 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Apr 30 03:27:43.030494 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Apr 30 03:27:43.030507 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Apr 30 03:27:43.030528 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Apr 30 03:27:43.030547 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:27:43.030559 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:27:43.030573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 30 03:27:43.030586 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 30 03:27:43.030609 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Apr 30 03:27:43.030623 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Apr 30 03:27:43.030641 kernel: Zone ranges:
Apr 30 03:27:43.030654 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:27:43.030670 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Apr 30 03:27:43.030683 kernel: Normal empty
Apr 30 03:27:43.032806 kernel: Movable zone start for each node
Apr 30 03:27:43.032822 kernel: Early memory node ranges
Apr 30 03:27:43.032837 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 03:27:43.032849 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Apr 30 03:27:43.032862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Apr 30 03:27:43.032894 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:27:43.032908 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 03:27:43.032933 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Apr 30 03:27:43.032949 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:27:43.032965 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:27:43.032978 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:27:43.032991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:27:43.033005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:27:43.033021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:27:43.033042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:27:43.033058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:27:43.033073 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:27:43.033089 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:27:43.033105 kernel: TSC deadline timer available
Apr 30 03:27:43.033121 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:27:43.033138 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:27:43.033153 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Apr 30 03:27:43.033174 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:27:43.033191 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:27:43.033212 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:27:43.033229 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:27:43.033246 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:27:43.033262 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:27:43.033277 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 03:27:43.033297 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:27:43.033315 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:27:43.033331 kernel: random: crng init done
Apr 30 03:27:43.033352 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:27:43.033369 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:27:43.033386 kernel: Fallback order for Node 0: 0
Apr 30 03:27:43.033402 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Apr 30 03:27:43.033429 kernel: Policy zone: DMA32
Apr 30 03:27:43.033445 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:27:43.033462 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125148K reserved, 0K cma-reserved)
Apr 30 03:27:43.033478 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:27:43.033498 kernel: Kernel/User page tables isolation: enabled
Apr 30 03:27:43.033514 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:27:43.033529 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:27:43.033545 kernel: Dynamic Preempt: voluntary
Apr 30 03:27:43.033556 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:27:43.033578 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:27:43.033593 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:27:43.033608 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:27:43.033623 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:27:43.033638 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:27:43.033656 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:27:43.033669 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:27:43.033681 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:27:43.033693 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:27:43.033726 kernel: Console: colour VGA+ 80x25
Apr 30 03:27:43.033739 kernel: printk: console [tty0] enabled
Apr 30 03:27:43.033751 kernel: printk: console [ttyS0] enabled
Apr 30 03:27:43.033763 kernel: ACPI: Core revision 20230628
Apr 30 03:27:43.033778 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:27:43.033797 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:27:43.033809 kernel: x2apic enabled
Apr 30 03:27:43.033822 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:27:43.033835 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:27:43.033847 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Apr 30 03:27:43.033861 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Apr 30 03:27:43.033873 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 30 03:27:43.033886 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 30 03:27:43.033917 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:27:43.033932 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:27:43.033945 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:27:43.033965 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:27:43.033977 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 30 03:27:43.033990 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:27:43.034005 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:27:43.034019 kernel: MDS: Mitigation: Clear CPU buffers
Apr 30 03:27:43.034033 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:27:43.034059 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:27:43.034075 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:27:43.034089 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:27:43.034106 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:27:43.034120 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 30 03:27:43.034134 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:27:43.034149 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:27:43.034164 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:27:43.034186 kernel: landlock: Up and running.
Apr 30 03:27:43.034201 kernel: SELinux: Initializing.
Apr 30 03:27:43.034215 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:27:43.034228 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:27:43.034242 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Apr 30 03:27:43.034257 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:27:43.034271 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:27:43.034286 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:27:43.034299 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Apr 30 03:27:43.034320 kernel: signal: max sigframe size: 1776
Apr 30 03:27:43.034336 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:27:43.034352 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:27:43.034366 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:27:43.034380 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:27:43.034393 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:27:43.034407 kernel: .... node #0, CPUs: #1
Apr 30 03:27:43.034421 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:27:43.034441 kernel: smpboot: Max logical packages: 1
Apr 30 03:27:43.034463 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Apr 30 03:27:43.034476 kernel: devtmpfs: initialized
Apr 30 03:27:43.034491 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:27:43.034505 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:27:43.034519 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:27:43.034533 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:27:43.034547 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:27:43.034560 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:27:43.034575 kernel: audit: type=2000 audit(1745983662.102:1): state=initialized audit_enabled=0 res=1
Apr 30 03:27:43.034595 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:27:43.034609 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:27:43.034626 kernel: cpuidle: using governor menu
Apr 30 03:27:43.034640 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:27:43.034656 kernel: dca service started, version 1.12.1
Apr 30 03:27:43.034670 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:27:43.036747 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:27:43.036835 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:27:43.036854 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:27:43.036886 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:27:43.036904 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:27:43.036921 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:27:43.036939 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:27:43.036956 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:27:43.036973 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:27:43.036991 kernel: ACPI: Interpreter enabled
Apr 30 03:27:43.037008 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:27:43.037024 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:27:43.037047 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:27:43.037064 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:27:43.037077 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 03:27:43.037090 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:27:43.037523 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:27:43.037764 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 03:27:43.037928 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 03:27:43.037960 kernel: acpiphp: Slot [3] registered
Apr 30 03:27:43.037973 kernel: acpiphp: Slot [4] registered
Apr 30 03:27:43.037986 kernel: acpiphp: Slot [5] registered
Apr 30 03:27:43.037999 kernel: acpiphp: Slot [6] registered
Apr 30 03:27:43.038012 kernel: acpiphp: Slot [7] registered
Apr 30 03:27:43.038025 kernel: acpiphp: Slot [8] registered
Apr 30 03:27:43.038038 kernel: acpiphp: Slot [9] registered
Apr 30 03:27:43.038052 kernel: acpiphp: Slot [10] registered
Apr 30 03:27:43.038066 kernel: acpiphp: Slot [11] registered
Apr 30 03:27:43.038083 kernel: acpiphp: Slot [12] registered
Apr 30 03:27:43.038102 kernel: acpiphp: Slot [13] registered
Apr 30 03:27:43.038115 kernel: acpiphp: Slot [14] registered
Apr 30 03:27:43.038128 kernel: acpiphp: Slot [15] registered
Apr 30 03:27:43.038141 kernel: acpiphp: Slot [16] registered
Apr 30 03:27:43.038155 kernel: acpiphp: Slot [17] registered
Apr 30 03:27:43.038171 kernel: acpiphp: Slot [18] registered
Apr 30 03:27:43.038185 kernel: acpiphp: Slot [19] registered
Apr 30 03:27:43.038201 kernel: acpiphp: Slot [20] registered
Apr 30 03:27:43.038216 kernel: acpiphp: Slot [21] registered
Apr 30 03:27:43.038237 kernel: acpiphp: Slot [22] registered
Apr 30 03:27:43.038251 kernel: acpiphp: Slot [23] registered
Apr 30 03:27:43.038267 kernel: acpiphp: Slot [24] registered
Apr 30 03:27:43.038280 kernel: acpiphp: Slot [25] registered
Apr 30 03:27:43.038295 kernel: acpiphp: Slot [26] registered
Apr 30 03:27:43.038308 kernel: acpiphp: Slot [27] registered
Apr 30 03:27:43.038321 kernel: acpiphp: Slot [28] registered
Apr 30 03:27:43.038336 kernel: acpiphp: Slot [29] registered
Apr 30 03:27:43.038351 kernel: acpiphp: Slot [30] registered
Apr 30 03:27:43.038364 kernel: acpiphp: Slot [31] registered
Apr 30 03:27:43.038384 kernel: PCI host bridge to bus 0000:00
Apr 30 03:27:43.038621 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:27:43.040933 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:27:43.041131 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:27:43.041265 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 03:27:43.041398 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Apr 30 03:27:43.041537 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:27:43.041921 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 03:27:43.042130 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 03:27:43.042317 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 30 03:27:43.042489 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Apr 30 03:27:43.042672 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 30 03:27:43.045054 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 30 03:27:43.045265 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 30 03:27:43.045467 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 30 03:27:43.045672 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Apr 30 03:27:43.046018 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Apr 30 03:27:43.046201 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 30 03:27:43.046368 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Apr 30 03:27:43.046550 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Apr 30 03:27:43.048876 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Apr 30 03:27:43.049085 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Apr 30 03:27:43.049242 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Apr 30 03:27:43.049393 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Apr 30 03:27:43.049559 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Apr 30 03:27:43.049789 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:27:43.050003 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:27:43.050161 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Apr 30 03:27:43.050316 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Apr 30 03:27:43.050468 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Apr 30 03:27:43.050632 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:27:43.052980 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Apr 30 03:27:43.053200 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Apr 30 03:27:43.053374 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Apr 30 03:27:43.053553 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Apr 30 03:27:43.053772 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Apr 30 03:27:43.053928 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Apr 30 03:27:43.054078 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Apr 30 03:27:43.054255 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:27:43.054405 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Apr 30 03:27:43.054569 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Apr 30 03:27:43.056870 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Apr 30 03:27:43.057134 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:27:43.057290 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Apr 30 03:27:43.057436 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Apr 30 03:27:43.057585 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Apr 30 03:27:43.057881 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Apr 30 03:27:43.058066 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Apr 30 03:27:43.058216 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Apr 30 03:27:43.058235 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:27:43.058250 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:27:43.058264 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:27:43.058277 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:27:43.058291 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 03:27:43.058313 kernel: iommu: Default domain type: Translated
Apr 30 03:27:43.058327 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:27:43.058341 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:27:43.058354 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:27:43.058367 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 03:27:43.058379 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Apr 30 03:27:43.058548 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 30 03:27:43.060812 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 30 03:27:43.061106 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:27:43.061138 kernel: vgaarb: loaded
Apr 30 03:27:43.061154 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:27:43.061169 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:27:43.061185 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:27:43.061198 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:27:43.061212 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:27:43.061225 kernel: pnp: PnP ACPI init
Apr 30 03:27:43.061240 kernel: pnp: PnP ACPI: found 4 devices
Apr 30 03:27:43.061267 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:27:43.061283 kernel: NET: Registered PF_INET protocol family
Apr 30 03:27:43.061296 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:27:43.061312 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:27:43.061328 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:27:43.061344 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:27:43.061358 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:27:43.061372 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:27:43.061387 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:27:43.061409 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:27:43.061425 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:27:43.061440 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:27:43.061644 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:27:43.061818 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:27:43.061963 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:27:43.062125 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 03:27:43.062259 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Apr 30 03:27:43.062441 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 30 03:27:43.062606 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 03:27:43.062631 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 03:27:43.064986 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 30734 usecs
Apr 30 03:27:43.065043 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:27:43.065063 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:27:43.065083 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Apr 30 03:27:43.065103 kernel: Initialise system trusted keyrings
Apr 30 03:27:43.065134 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:27:43.065153 kernel: Key type asymmetric registered
Apr 30 03:27:43.065171 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:27:43.065188 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:27:43.065206 kernel: io scheduler mq-deadline registered
Apr 30 03:27:43.065224 kernel: io scheduler kyber registered
Apr 30 03:27:43.065242 kernel: io scheduler bfq registered
Apr 30 03:27:43.065260 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:27:43.065279 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Apr 30 03:27:43.065298 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 30 03:27:43.065322 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 30 03:27:43.065340 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:27:43.065358 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:27:43.065376 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:27:43.065394 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:27:43.065412 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:27:43.065430 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:27:43.065681 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 03:27:43.065884 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 03:27:43.066036 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:27:42 UTC (1745983662)
Apr 30 03:27:43.066189 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Apr 30 03:27:43.066208 kernel: intel_pstate: CPU model not supported
Apr 30 03:27:43.066225 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:27:43.066243 kernel: Segment Routing with IPv6
Apr 30 03:27:43.066260 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:27:43.066276 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:27:43.066303 kernel: Key type dns_resolver registered
Apr 30 03:27:43.066319 kernel: IPI shorthand broadcast: enabled
Apr 30 03:27:43.066336 kernel: sched_clock: Marking stable (940006481, 83364584)->(1122591466, -99220401)
Apr 30 03:27:43.066353 kernel: registered taskstats version 1
Apr 30 03:27:43.066369 kernel: Loading compiled-in X.509 certificates
Apr 30 03:27:43.066386 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:27:43.066402 kernel: Key type .fscrypt registered
Apr 30 03:27:43.066419 kernel: Key type fscrypt-provisioning registered
Apr 30 03:27:43.066437 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:27:43.066460 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:27:43.066476 kernel: ima: No architecture policies found
Apr 30 03:27:43.066493 kernel: clk: Disabling unused clocks
Apr 30 03:27:43.066509 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:27:43.066526 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:27:43.066574 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:27:43.066595 kernel: Run /init as init process
Apr 30 03:27:43.066610 kernel: with arguments:
Apr 30 03:27:43.066625 kernel: /init
Apr 30 03:27:43.066644 kernel: with environment:
Apr 30 03:27:43.066658 kernel: HOME=/
Apr 30 03:27:43.066671 kernel: TERM=linux
Apr 30 03:27:43.067787 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:27:43.067874 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:27:43.067898 systemd[1]: Detected virtualization kvm.
Apr 30 03:27:43.067915 systemd[1]: Detected architecture x86-64.
Apr 30 03:27:43.067931 systemd[1]: Running in initrd.
Apr 30 03:27:43.067952 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:27:43.067968 systemd[1]: Hostname set to .
Apr 30 03:27:43.067985 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:27:43.067999 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:27:43.068015 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:27:43.068031 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:27:43.068048 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:27:43.068063 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:27:43.068084 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:27:43.068098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:27:43.068115 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:27:43.068130 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:27:43.068146 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:27:43.068161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:27:43.068177 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:27:43.068199 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:27:43.068216 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:27:43.068237 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:27:43.068252 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:27:43.068267 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:27:43.068287 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:27:43.068301 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:27:43.068316 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:27:43.068331 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:27:43.068346 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:27:43.068363 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:27:43.068377 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:27:43.068392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:27:43.068409 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:27:43.068431 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:27:43.068446 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:27:43.068463 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:27:43.068480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:43.068500 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:27:43.068521 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:27:43.068538 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:27:43.068560 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:27:43.068578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:27:43.068595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:27:43.068679 systemd-journald[183]: Collecting audit messages is disabled.
Apr 30 03:27:43.068785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:27:43.068801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:43.068820 systemd-journald[183]: Journal started
Apr 30 03:27:43.068860 systemd-journald[183]: Runtime Journal (/run/log/journal/f08d5e0bfd1c4413bff2f617fa1a5a46) is 4.9M, max 39.3M, 34.4M free.
Apr 30 03:27:43.035149 systemd-modules-load[184]: Inserted module 'overlay'
Apr 30 03:27:43.078542 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:27:43.082741 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:27:43.098068 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:27:43.104002 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:27:43.104042 kernel: Bridge firewalling registered
Apr 30 03:27:43.103136 systemd-modules-load[184]: Inserted module 'br_netfilter'
Apr 30 03:27:43.105580 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:27:43.113132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:27:43.124063 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:27:43.140527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:27:43.141894 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:27:43.149105 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:27:43.161217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:27:43.185725 dracut-cmdline[218]: dracut-dracut-053
Apr 30 03:27:43.188513 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:27:43.196587 systemd-resolved[220]: Positive Trust Anchors:
Apr 30 03:27:43.197247 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:27:43.197287 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:27:43.203871 systemd-resolved[220]: Defaulting to hostname 'linux'.
Apr 30 03:27:43.206171 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:27:43.207271 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:27:43.303744 kernel: SCSI subsystem initialized
Apr 30 03:27:43.313732 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:27:43.324728 kernel: iscsi: registered transport (tcp)
Apr 30 03:27:43.346874 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:27:43.346951 kernel: QLogic iSCSI HBA Driver
Apr 30 03:27:43.397926 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:27:43.404030 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:27:43.436841 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:27:43.436968 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:27:43.436991 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:27:43.484744 kernel: raid6: avx2x4 gen() 17251 MB/s
Apr 30 03:27:43.500733 kernel: raid6: avx2x2 gen() 16142 MB/s
Apr 30 03:27:43.517852 kernel: raid6: avx2x1 gen() 13398 MB/s
Apr 30 03:27:43.517937 kernel: raid6: using algorithm avx2x4 gen() 17251 MB/s
Apr 30 03:27:43.535847 kernel: raid6: .... xor() 7101 MB/s, rmw enabled
Apr 30 03:27:43.535940 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 03:27:43.556727 kernel: xor: automatically using best checksumming function avx
Apr 30 03:27:43.720738 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:27:43.734362 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:27:43.741063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:27:43.758113 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Apr 30 03:27:43.763877 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:27:43.770927 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:27:43.794737 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Apr 30 03:27:43.836035 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:27:43.844073 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:27:43.913914 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:27:43.925042 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:27:43.946609 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:27:43.950786 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:27:43.952917 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:27:43.954490 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:27:43.963235 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:27:43.994962 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:27:44.023818 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Apr 30 03:27:44.100343 kernel: scsi host0: Virtio SCSI HBA
Apr 30 03:27:44.100541 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Apr 30 03:27:44.100670 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:27:44.100706 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:27:44.100720 kernel: GPT:9289727 != 125829119
Apr 30 03:27:44.100733 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:27:44.100745 kernel: GPT:9289727 != 125829119
Apr 30 03:27:44.100756 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:27:44.100769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:27:44.102203 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Apr 30 03:27:44.110422 kernel: libata version 3.00 loaded.
Apr 30 03:27:44.110460 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Apr 30 03:27:44.110713 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:27:44.110174 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:27:44.110346 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:27:44.112018 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:27:44.113279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:27:44.119509 kernel: ata_piix 0000:00:01.1: version 2.13
Apr 30 03:27:44.151899 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:27:44.151928 kernel: scsi host1: ata_piix
Apr 30 03:27:44.152164 kernel: scsi host2: ata_piix
Apr 30 03:27:44.152396 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Apr 30 03:27:44.152413 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Apr 30 03:27:44.113492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:44.115148 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:44.121066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:44.185834 kernel: ACPI: bus type USB registered
Apr 30 03:27:44.187798 kernel: usbcore: registered new interface driver usbfs
Apr 30 03:27:44.187838 kernel: usbcore: registered new interface driver hub
Apr 30 03:27:44.187851 kernel: usbcore: registered new device driver usb
Apr 30 03:27:44.193310 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 03:27:44.234908 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
Apr 30 03:27:44.234958 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (448)
Apr 30 03:27:44.235209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:44.248443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 03:27:44.256289 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 03:27:44.262496 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 03:27:44.263210 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 03:27:44.271080 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:27:44.275024 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:27:44.281737 disk-uuid[532]: Primary Header is updated.
Apr 30 03:27:44.281737 disk-uuid[532]: Secondary Entries is updated.
Apr 30 03:27:44.281737 disk-uuid[532]: Secondary Header is updated.
Apr 30 03:27:44.290816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:27:44.307834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:27:44.323734 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:27:44.329273 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:27:44.417410 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Apr 30 03:27:44.422976 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Apr 30 03:27:44.423221 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Apr 30 03:27:44.423396 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Apr 30 03:27:44.423663 kernel: hub 1-0:1.0: USB hub found
Apr 30 03:27:44.423945 kernel: hub 1-0:1.0: 2 ports detected
Apr 30 03:27:45.319770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:27:45.320317 disk-uuid[533]: The operation has completed successfully.
Apr 30 03:27:45.387164 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:27:45.387333 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:27:45.392148 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:27:45.401888 sh[565]: Success
Apr 30 03:27:45.418717 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:27:45.489189 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:27:45.506181 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:27:45.510757 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:27:45.531948 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:27:45.532051 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:27:45.532066 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:27:45.533916 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:27:45.534018 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:27:45.544908 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:27:45.546321 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:27:45.553109 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:27:45.555704 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:27:45.576580 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:27:45.576708 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:27:45.576726 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:27:45.585744 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:27:45.602868 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:27:45.604409 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:27:45.611915 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:27:45.621096 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:27:45.759953 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:27:45.772149 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:27:45.794654 ignition[660]: Ignition 2.19.0
Apr 30 03:27:45.794839 ignition[660]: Stage: fetch-offline
Apr 30 03:27:45.794918 ignition[660]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:27:45.794932 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:27:45.795092 ignition[660]: parsed url from cmdline: ""
Apr 30 03:27:45.795098 ignition[660]: no config URL provided
Apr 30 03:27:45.795107 ignition[660]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:27:45.800449 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:27:45.795121 ignition[660]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:27:45.795131 ignition[660]: failed to fetch config: resource requires networking
Apr 30 03:27:45.795395 ignition[660]: Ignition finished successfully
Apr 30 03:27:45.824718 systemd-networkd[756]: lo: Link UP
Apr 30 03:27:45.824729 systemd-networkd[756]: lo: Gained carrier
Apr 30 03:27:45.827385 systemd-networkd[756]: Enumeration completed
Apr 30 03:27:45.827885 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 03:27:45.827889 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Apr 30 03:27:45.828884 systemd-networkd[756]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:27:45.828891 systemd-networkd[756]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:27:45.829022 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:27:45.829601 systemd-networkd[756]: eth0: Link UP
Apr 30 03:27:45.829606 systemd-networkd[756]: eth0: Gained carrier
Apr 30 03:27:45.829615 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 03:27:45.829873 systemd[1]: Reached target network.target - Network.
Apr 30 03:27:45.833826 systemd-networkd[756]: eth1: Link UP
Apr 30 03:27:45.833831 systemd-networkd[756]: eth1: Gained carrier
Apr 30 03:27:45.833847 systemd-networkd[756]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:27:45.837037 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:27:45.847968 systemd-networkd[756]: eth0: DHCPv4 address 143.198.63.212/20, gateway 143.198.48.1 acquired from 169.254.169.253
Apr 30 03:27:45.853872 systemd-networkd[756]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253
Apr 30 03:27:45.859856 ignition[760]: Ignition 2.19.0
Apr 30 03:27:45.859883 ignition[760]: Stage: fetch
Apr 30 03:27:45.861186 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:27:45.861210 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:27:45.861430 ignition[760]: parsed url from cmdline: ""
Apr 30 03:27:45.861442 ignition[760]: no config URL provided
Apr 30 03:27:45.861476 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:27:45.861496 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:27:45.861529 ignition[760]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Apr 30 03:27:45.877715 ignition[760]: GET result: OK
Apr 30 03:27:45.877921 ignition[760]: parsing config with SHA512: b1efbf81511d40252b269c1753bf578411c15a0b29066fdf431f928e1ba809ccefb3f9f0be22bbcdb344000f67170e9be82c5c0a4ca40d87b89556ac28f3ec03
Apr 30 03:27:45.884209 unknown[760]: fetched base config from "system"
Apr 30 03:27:45.884977 ignition[760]: fetch: fetch complete
Apr 30 03:27:45.884222 unknown[760]: fetched base config from "system"
Apr 30 03:27:45.884988 ignition[760]: fetch: fetch passed
Apr 30 03:27:45.884229 unknown[760]: fetched user config from "digitalocean"
Apr 30 03:27:45.885051 ignition[760]: Ignition finished successfully
Apr 30 03:27:45.887809 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:27:45.892983 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:27:45.918222 ignition[767]: Ignition 2.19.0
Apr 30 03:27:45.918239 ignition[767]: Stage: kargs
Apr 30 03:27:45.918456 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:27:45.918468 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:27:45.921315 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:27:45.919669 ignition[767]: kargs: kargs passed
Apr 30 03:27:45.919787 ignition[767]: Ignition finished successfully
Apr 30 03:27:45.928070 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:27:45.958202 ignition[773]: Ignition 2.19.0
Apr 30 03:27:45.958219 ignition[773]: Stage: disks
Apr 30 03:27:45.958451 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:27:45.958462 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:27:45.959498 ignition[773]: disks: disks passed
Apr 30 03:27:45.962809 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:27:45.959715 ignition[773]: Ignition finished successfully
Apr 30 03:27:45.965762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:27:45.966487 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:27:45.967375 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:27:45.968337 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:27:45.969116 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:27:45.976177 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:27:46.002364 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 03:27:46.006538 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:27:46.012916 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:27:46.121756 kernel: EXT4-fs (vda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:27:46.122024 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:27:46.123151 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:27:46.130975 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:27:46.141159 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:27:46.144818 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Apr 30 03:27:46.148001 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:27:46.149070 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:27:46.154830 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (789)
Apr 30 03:27:46.149113 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:27:46.158912 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:27:46.158983 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:27:46.160586 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:27:46.163854 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:27:46.170052 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:27:46.172254 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:27:46.179604 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:27:46.243918 coreos-metadata[792]: Apr 30 03:27:46.243 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:27:46.249825 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:27:46.254974 coreos-metadata[791]: Apr 30 03:27:46.254 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:27:46.259419 coreos-metadata[792]: Apr 30 03:27:46.258 INFO Fetch successful Apr 30 03:27:46.261457 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:27:46.265372 coreos-metadata[792]: Apr 30 03:27:46.264 INFO wrote hostname ci-4081.3.3-0-7c044d2e24 to /sysroot/etc/hostname Apr 30 03:27:46.268162 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:27:46.269976 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:27:46.271463 coreos-metadata[791]: Apr 30 03:27:46.270 INFO Fetch successful Apr 30 03:27:46.280571 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Apr 30 03:27:46.280768 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Apr 30 03:27:46.282839 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:27:46.393990 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:27:46.398864 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:27:46.400894 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:27:46.412767 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:27:46.433885 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:27:46.448383 ignition[912]: INFO : Ignition 2.19.0 Apr 30 03:27:46.449744 ignition[912]: INFO : Stage: mount Apr 30 03:27:46.449744 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:27:46.449744 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:27:46.452042 ignition[912]: INFO : mount: mount passed Apr 30 03:27:46.452042 ignition[912]: INFO : Ignition finished successfully Apr 30 03:27:46.452312 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:27:46.455906 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:27:46.531006 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:27:46.538119 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:27:46.549737 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923) Apr 30 03:27:46.552032 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:27:46.552115 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:27:46.552135 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:27:46.555745 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:27:46.559036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:27:46.592807 ignition[940]: INFO : Ignition 2.19.0 Apr 30 03:27:46.592807 ignition[940]: INFO : Stage: files Apr 30 03:27:46.594065 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:27:46.594065 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:27:46.595254 ignition[940]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:27:46.595905 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:27:46.595905 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:27:46.598602 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:27:46.599273 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:27:46.599273 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:27:46.599125 unknown[940]: wrote ssh authorized keys file for user: core Apr 30 03:27:46.601301 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:27:46.602414 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:27:46.655344 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:27:46.779728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:27:46.779728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:27:46.779728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:27:46.779728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:27:46.779728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:27:46.779728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:27:46.779728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:27:46.779728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:27:46.779728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:27:46.788581 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:27:46.788581 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:27:46.788581 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:27:46.788581 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:27:46.788581 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:27:46.788581 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 03:27:47.406041 systemd-networkd[756]: eth0: Gained IPv6LL Apr 30 03:27:47.457293 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:27:47.662251 systemd-networkd[756]: eth1: Gained IPv6LL Apr 30 03:27:47.735069 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:27:47.735069 ignition[940]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:27:47.736943 ignition[940]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:27:47.736943 ignition[940]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:27:47.736943 ignition[940]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:27:47.736943 ignition[940]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:27:47.741342 ignition[940]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:27:47.741342 ignition[940]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:27:47.741342 ignition[940]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:27:47.741342 ignition[940]: INFO : files: files passed Apr 30 03:27:47.741342 ignition[940]: INFO : Ignition finished successfully Apr 30 03:27:47.740331 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:27:47.754645 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:27:47.756919 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:27:47.763403 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:27:47.764297 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:27:47.788182 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:27:47.788182 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:27:47.790627 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:27:47.794145 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:27:47.795195 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:27:47.801022 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:27:47.871233 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Apr 30 03:27:47.871455 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:27:47.873545 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:27:47.874173 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:27:47.875143 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:27:47.881111 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:27:47.910402 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:27:47.917133 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:27:47.939142 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:27:47.939742 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:27:47.940199 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:27:47.940567 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:27:47.942596 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:27:47.943388 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:27:47.944355 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:27:47.945066 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:27:47.945832 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:27:47.946476 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:27:47.947283 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:27:47.948136 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:27:47.948860 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:27:47.949575 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:27:47.950328 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:27:47.950999 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:27:47.951166 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:27:47.952228 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:27:47.953059 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:27:47.953728 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:27:47.953842 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:27:47.954463 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:27:47.954636 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:27:47.955767 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:27:47.955970 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:27:47.956662 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:27:47.956885 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:27:47.957867 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 03:27:47.958016 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Apr 30 03:27:47.970822 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:27:47.971278 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:27:47.971667 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:27:47.974003 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:27:47.976234 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:27:47.976925 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:27:47.982195 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:27:47.982480 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:27:47.992136 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:27:47.992329 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:27:48.000505 ignition[993]: INFO : Ignition 2.19.0 Apr 30 03:27:48.002378 ignition[993]: INFO : Stage: umount Apr 30 03:27:48.002378 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:27:48.002378 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:27:48.006631 ignition[993]: INFO : umount: umount passed Apr 30 03:27:48.006631 ignition[993]: INFO : Ignition finished successfully Apr 30 03:27:48.004598 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:27:48.004881 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:27:48.009094 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:27:48.009286 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:27:48.015650 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:27:48.015755 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:27:48.017196 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:27:48.017272 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:27:48.018215 systemd[1]: Stopped target network.target - Network. Apr 30 03:27:48.018939 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:27:48.019019 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:27:48.020789 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:27:48.021382 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:27:48.024842 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:27:48.026043 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:27:48.026400 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:27:48.027188 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:27:48.027252 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:27:48.028158 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:27:48.028224 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:27:48.028916 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:27:48.029009 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:27:48.029736 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:27:48.029802 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 30 03:27:48.030663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:27:48.032139 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:27:48.034449 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:27:48.035352 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:27:48.035489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:27:48.035782 systemd-networkd[756]: eth1: DHCPv6 lease lost Apr 30 03:27:48.038238 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:27:48.038320 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:27:48.039867 systemd-networkd[756]: eth0: DHCPv6 lease lost Apr 30 03:27:48.042450 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:27:48.042775 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:27:48.046212 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:27:48.046412 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:27:48.048798 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:27:48.048922 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:27:48.054901 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:27:48.055305 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:27:48.055403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:27:48.055989 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:27:48.056059 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:27:48.056550 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:27:48.056614 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:27:48.059987 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:27:48.060097 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:27:48.060956 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:27:48.080579 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:27:48.081572 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:27:48.083335 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:27:48.083545 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:27:48.085593 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:27:48.085726 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:27:48.086212 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:27:48.086261 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:27:48.087187 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:27:48.087274 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:27:48.088974 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:27:48.089066 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:27:48.090419 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 30 03:27:48.090517 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:27:48.097236 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:27:48.100116 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:27:48.100284 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:27:48.101307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:27:48.101413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:27:48.122681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:27:48.122864 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:27:48.124314 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:27:48.131084 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:27:48.150827 systemd[1]: Switching root. Apr 30 03:27:48.202600 systemd-journald[183]: Journal stopped Apr 30 03:27:49.428603 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Apr 30 03:27:49.428749 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:27:49.428775 kernel: SELinux: policy capability open_perms=1 Apr 30 03:27:49.428796 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:27:49.428816 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:27:49.428843 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:27:49.428861 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:27:49.428880 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:27:49.428899 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:27:49.428919 kernel: audit: type=1403 audit(1745983668.372:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:27:49.428941 systemd[1]: Successfully loaded SELinux policy in 49.839ms. Apr 30 03:27:49.428990 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.812ms. Apr 30 03:27:49.429012 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:27:49.429033 systemd[1]: Detected virtualization kvm. Apr 30 03:27:49.429059 systemd[1]: Detected architecture x86-64. Apr 30 03:27:49.429079 systemd[1]: Detected first boot. Apr 30 03:27:49.429110 systemd[1]: Hostname set to . Apr 30 03:27:49.429131 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:27:49.429153 zram_generator::config[1036]: No configuration found. Apr 30 03:27:49.429176 systemd[1]: Populated /etc with preset unit settings. Apr 30 03:27:49.429197 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 03:27:49.429222 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 03:27:49.429244 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 03:27:49.429267 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:27:49.429285 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Apr 30 03:27:49.429305 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:27:49.429324 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:27:49.429346 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:27:49.429371 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:27:49.429392 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:27:49.429418 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:27:49.429440 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:27:49.429462 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:27:49.429483 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:27:49.429504 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:27:49.429526 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:27:49.429551 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:27:49.429572 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:27:49.429593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:27:49.429616 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 03:27:49.429639 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 03:27:49.429660 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 03:27:49.444785 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:27:49.444848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:27:49.444871 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:27:49.444894 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:27:49.444908 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:27:49.444920 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:27:49.444933 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:27:49.444946 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:27:49.444959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:27:49.444981 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:27:49.444994 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:27:49.445171 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 03:27:49.445198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:27:49.445216 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:27:49.445236 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:49.445255 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:27:49.445273 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Apr 30 03:27:49.445292 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:27:49.445312 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:27:49.445340 systemd[1]: Reached target machines.target - Containers. Apr 30 03:27:49.445357 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:27:49.445376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:27:49.445396 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:27:49.445415 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:27:49.445433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:27:49.445451 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:27:49.445467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:27:49.445486 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:27:49.445502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:27:49.445525 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:27:49.445542 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 03:27:49.445559 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 03:27:49.445576 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 03:27:49.445593 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 03:27:49.445619 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:27:49.445643 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:27:49.445668 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:27:49.445699 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:27:49.445723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:27:49.445741 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 03:27:49.445763 systemd[1]: Stopped verity-setup.service. Apr 30 03:27:49.445782 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:49.445802 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:27:49.445824 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:27:49.445842 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:27:49.445860 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:27:49.445879 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:27:49.445898 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:27:49.445918 kernel: loop: module loaded Apr 30 03:27:49.445941 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:27:49.445962 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Apr 30 03:27:49.445981 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:27:49.446006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:27:49.446032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:27:49.446060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:27:49.446082 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:27:49.446103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:27:49.446137 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:27:49.446159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:27:49.446181 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:27:49.446202 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:27:49.446226 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:27:49.446248 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:27:49.446270 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:27:49.446288 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:27:49.446305 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:27:49.446328 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:27:49.446346 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:27:49.446365 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:27:49.446383 kernel: fuse: init (API version 7.39) Apr 30 03:27:49.446401 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:27:49.446419 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:27:49.446437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:27:49.446456 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:27:49.446479 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:27:49.446507 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:27:49.446527 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:27:49.446548 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:27:49.446567 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:27:49.446588 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:27:49.446648 systemd-journald[1106]: Collecting audit messages is disabled. Apr 30 03:27:49.446674 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:27:49.448898 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Apr 30 03:27:49.448946 systemd-journald[1106]: Journal started Apr 30 03:27:49.448993 systemd-journald[1106]: Runtime Journal (/run/log/journal/f08d5e0bfd1c4413bff2f617fa1a5a46) is 4.9M, max 39.3M, 34.4M free. Apr 30 03:27:49.463877 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:27:49.009437 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:27:49.031423 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 03:27:49.032082 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 03:27:49.464981 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:27:49.510879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:27:49.530404 kernel: loop0: detected capacity change from 0 to 142488 Apr 30 03:27:49.537489 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:27:49.538250 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:27:49.548004 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:27:49.560886 systemd-journald[1106]: Time spent on flushing to /var/log/journal/f08d5e0bfd1c4413bff2f617fa1a5a46 is 38.696ms for 987 entries. Apr 30 03:27:49.560886 systemd-journald[1106]: System Journal (/var/log/journal/f08d5e0bfd1c4413bff2f617fa1a5a46) is 8.0M, max 195.6M, 187.6M free. Apr 30 03:27:49.606733 systemd-journald[1106]: Received client request to flush runtime journal. Apr 30 03:27:49.606781 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:27:49.606799 kernel: ACPI: bus type drm_connector registered Apr 30 03:27:49.578308 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:27:49.590919 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:27:49.600441 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:27:49.600600 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:27:49.615383 kernel: loop1: detected capacity change from 0 to 210664 Apr 30 03:27:49.616792 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:27:49.622211 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:27:49.625207 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:27:49.639287 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:27:49.646027 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:27:49.692737 kernel: loop2: detected capacity change from 0 to 8 Apr 30 03:27:49.705783 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 03:27:49.738779 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:27:49.744466 kernel: loop3: detected capacity change from 0 to 140768 Apr 30 03:27:49.744623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:27:49.801737 kernel: loop4: detected capacity change from 0 to 142488 Apr 30 03:27:49.833817 kernel: loop5: detected capacity change from 0 to 210664 Apr 30 03:27:49.835409 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. 
Apr 30 03:27:49.837172 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Apr 30 03:27:49.860730 kernel: loop6: detected capacity change from 0 to 8 Apr 30 03:27:49.864731 kernel: loop7: detected capacity change from 0 to 140768 Apr 30 03:27:49.864099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:27:49.884451 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Apr 30 03:27:49.885417 (sd-merge)[1181]: Merged extensions into '/usr'. Apr 30 03:27:49.900117 systemd[1]: Reloading requested from client PID 1131 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:27:49.900142 systemd[1]: Reloading... Apr 30 03:27:50.057737 zram_generator::config[1205]: No configuration found. Apr 30 03:27:50.175162 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:27:50.270476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:27:50.342099 systemd[1]: Reloading finished in 440 ms. Apr 30 03:27:50.387865 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:27:50.390874 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:27:50.401993 systemd[1]: Starting ensure-sysext.service... Apr 30 03:27:50.408009 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:27:50.437753 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:27:50.437784 systemd[1]: Reloading... Apr 30 03:27:50.460856 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:27:50.461468 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:27:50.465142 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:27:50.465533 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Apr 30 03:27:50.465621 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Apr 30 03:27:50.477411 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:27:50.477429 systemd-tmpfiles[1252]: Skipping /boot Apr 30 03:27:50.507549 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:27:50.507586 systemd-tmpfiles[1252]: Skipping /boot Apr 30 03:27:50.579939 zram_generator::config[1279]: No configuration found. Apr 30 03:27:50.734669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:27:50.804273 systemd[1]: Reloading finished in 365 ms. Apr 30 03:27:50.828960 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:27:50.834644 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:27:50.847067 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:27:50.852008 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Apr 30 03:27:50.855271 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:27:50.866756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:27:50.871947 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:27:50.880450 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:27:50.888939 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:50.889155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:27:50.897419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:27:50.902281 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:27:50.914148 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:27:50.914816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:27:50.915011 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:50.929364 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:27:50.933047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:50.933281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:27:50.933487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:27:50.933578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:50.938307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:50.938660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:27:50.944193 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:27:50.945481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:27:50.945716 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:50.949202 systemd[1]: Finished ensure-sysext.service. Apr 30 03:27:50.967023 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 03:27:50.969135 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:27:50.976220 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:27:50.976803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:27:50.988395 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:27:50.991902 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 30 03:27:50.997191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:27:50.997713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:27:51.000409 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:27:51.009260 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Apr 30 03:27:51.010629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:27:51.010838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:27:51.011644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:27:51.020538 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:27:51.020810 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:27:51.054072 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:27:51.056396 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:27:51.060603 augenrules[1359]: No rules Apr 30 03:27:51.060055 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:27:51.062430 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:27:51.068210 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:27:51.074080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:27:51.086023 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:27:51.196877 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:27:51.198198 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:27:51.226196 systemd-networkd[1374]: lo: Link UP Apr 30 03:27:51.226552 systemd-networkd[1374]: lo: Gained carrier Apr 30 03:27:51.227366 systemd-networkd[1374]: Enumeration completed Apr 30 03:27:51.228102 systemd-resolved[1329]: Positive Trust Anchors: Apr 30 03:27:51.229054 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:27:51.231539 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:27:51.231803 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:27:51.236933 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:27:51.240274 systemd-resolved[1329]: Using system hostname 'ci-4081.3.3-0-7c044d2e24'. Apr 30 03:27:51.242022 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:27:51.242494 systemd[1]: Reached target network.target - Network. 
Apr 30 03:27:51.242860 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:27:51.274843 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Apr 30 03:27:51.275366 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:51.275535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:27:51.284159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1383) Apr 30 03:27:51.286944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:27:51.297960 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:27:51.304642 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:27:51.305983 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:27:51.306040 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:27:51.306059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:51.324632 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:27:51.327278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:27:51.329531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:27:51.336580 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:27:51.350619 kernel: ISO 9660 Extensions: RRIP_1991A Apr 30 03:27:51.348931 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:27:51.354737 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Apr 30 03:27:51.355818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:27:51.356015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:27:51.358727 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:27:51.366894 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:27:51.367078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:27:51.367943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:27:51.376773 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:27:51.416517 systemd-networkd[1374]: eth1: Configuring with /run/systemd/network/10-6e:8f:63:47:fb:23.network. Apr 30 03:27:51.419951 systemd-networkd[1374]: eth1: Link UP Apr 30 03:27:51.419964 systemd-networkd[1374]: eth1: Gained carrier Apr 30 03:27:51.425091 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. 
Apr 30 03:27:51.435913 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 30 03:27:51.440759 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:27:51.445040 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Apr 30 03:27:51.445579 systemd-networkd[1374]: eth0: Configuring with /run/systemd/network/10-2a:07:b2:ac:25:d7.network. Apr 30 03:27:51.447758 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Apr 30 03:27:51.448579 systemd-networkd[1374]: eth0: Link UP Apr 30 03:27:51.448584 systemd-networkd[1374]: eth0: Gained carrier Apr 30 03:27:51.452151 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Apr 30 03:27:51.454082 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Apr 30 03:27:51.526734 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 30 03:27:51.558832 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:27:51.559155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:27:51.649814 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Apr 30 03:27:51.649903 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Apr 30 03:27:51.666967 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:27:51.667066 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 03:27:51.667088 kernel: [drm] features: -context_init Apr 30 03:27:51.667109 kernel: [drm] number of scanouts: 1 Apr 30 03:27:51.669967 kernel: [drm] number of cap sets: 0 Apr 30 03:27:51.674721 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Apr 30 03:27:51.687709 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 30 03:27:51.689806 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:27:51.688000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:27:51.688384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:27:51.694765 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 03:27:51.708323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:27:51.717555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:27:51.718127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:27:51.728093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:27:51.738780 kernel: EDAC MC: Ver: 3.0.0 Apr 30 03:27:51.778511 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:27:51.785340 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:27:51.800893 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:27:51.812458 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:27:51.841839 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:27:51.843318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:27:51.843541 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:27:51.843994 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 30 03:27:51.844150 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:27:51.844491 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:27:51.845361 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:27:51.845602 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:27:51.845807 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:27:51.845946 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:27:51.846113 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:27:51.849367 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:27:51.853888 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:27:51.862306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:27:51.866434 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:27:51.867991 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:27:51.868681 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:27:51.872800 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:27:51.873485 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:27:51.873520 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:27:51.887928 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:27:51.895841 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:27:51.900009 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:27:51.905955 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:27:51.916111 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:27:51.920590 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:27:51.923358 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:27:51.931126 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:27:51.936740 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:27:51.952196 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:27:51.964982 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:27:51.980123 coreos-metadata[1440]: Apr 30 03:27:51.979 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:27:51.987237 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:27:51.988951 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:27:51.991746 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 30 03:27:51.994296 coreos-metadata[1440]: Apr 30 03:27:51.994 INFO Fetch successful Apr 30 03:27:51.994524 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:27:52.006464 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:27:52.010952 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:27:52.028450 jq[1442]: false Apr 30 03:27:52.024498 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:27:52.024800 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:27:52.035260 dbus-daemon[1441]: [system] SELinux support is enabled Apr 30 03:27:52.035944 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:27:52.040243 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:27:52.040283 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:27:52.044517 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:27:52.044638 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Apr 30 03:27:52.044664 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:27:52.050437 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:27:52.050783 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:27:52.054945 extend-filesystems[1444]: Found loop4 Apr 30 03:27:52.054945 extend-filesystems[1444]: Found loop5 Apr 30 03:27:52.054945 extend-filesystems[1444]: Found loop6 Apr 30 03:27:52.054945 extend-filesystems[1444]: Found loop7 Apr 30 03:27:52.054945 extend-filesystems[1444]: Found vda Apr 30 03:27:52.054945 extend-filesystems[1444]: Found vda1 Apr 30 03:27:52.054945 extend-filesystems[1444]: Found vda2 Apr 30 03:27:52.054945 extend-filesystems[1444]: Found vda3 Apr 30 03:27:52.100500 extend-filesystems[1444]: Found usr Apr 30 03:27:52.100500 extend-filesystems[1444]: Found vda4 Apr 30 03:27:52.100500 extend-filesystems[1444]: Found vda6 Apr 30 03:27:52.100500 extend-filesystems[1444]: Found vda7 Apr 30 03:27:52.100500 extend-filesystems[1444]: Found vda9 Apr 30 03:27:52.100500 extend-filesystems[1444]: Checking size of /dev/vda9 Apr 30 03:27:52.121332 jq[1453]: true Apr 30 03:27:52.121809 extend-filesystems[1444]: Resized partition /dev/vda9 Apr 30 03:27:52.123650 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:27:52.141159 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Apr 30 03:27:52.157578 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:27:52.173267 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:27:52.176237 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
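extend-filesystems is growing the root filesystem on /dev/vda9 online (from 553472 to 15121403 blocks, per the kernel line above). The resize2fs run it performs is roughly equivalent to the following manual steps, a sketch that assumes the partition itself has already been grown:

    # Show the current block count, then grow the mounted ext4 filesystem to
    # fill its partition; resize2fs supports online growth for mounted ext4.
    dumpe2fs -h /dev/vda9 | grep 'Block count'
    resize2fs /dev/vda9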
Apr 30 03:27:52.179640 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:27:52.180823 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:27:52.191715 tar[1455]: linux-amd64/helm Apr 30 03:27:52.203931 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1391) Apr 30 03:27:52.209248 update_engine[1452]: I20250430 03:27:52.208575 1452 main.cc:92] Flatcar Update Engine starting Apr 30 03:27:52.221403 jq[1477]: true Apr 30 03:27:52.226278 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:27:52.233646 update_engine[1452]: I20250430 03:27:52.233137 1452 update_check_scheduler.cc:74] Next update check in 11m42s Apr 30 03:27:52.237225 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:27:52.290891 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Apr 30 03:27:52.297631 systemd-logind[1450]: New seat seat0. Apr 30 03:27:52.308348 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 03:27:52.308375 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:27:52.308874 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:27:52.312902 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 03:27:52.312902 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8 Apr 30 03:27:52.312902 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Apr 30 03:27:52.339900 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Apr 30 03:27:52.339900 extend-filesystems[1444]: Found vdb Apr 30 03:27:52.316829 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:27:52.317146 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:27:52.508086 bash[1507]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:27:52.512114 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:27:52.527835 systemd[1]: Starting sshkeys.service... Apr 30 03:27:52.577606 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:27:52.584208 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:27:52.626777 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:27:52.712491 coreos-metadata[1515]: Apr 30 03:27:52.712 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:27:52.726188 coreos-metadata[1515]: Apr 30 03:27:52.725 INFO Fetch successful Apr 30 03:27:52.740006 unknown[1515]: wrote ssh authorized keys file for user: core Apr 30 03:27:52.803103 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:27:52.804148 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:27:52.814773 systemd[1]: Finished sshkeys.service. Apr 30 03:27:52.941002 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:27:52.962744 containerd[1473]: time="2025-04-30T03:27:52.960713101Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:27:53.002832 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
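sshd-keygen.service generates the missing host keys (RSA, ECDSA, ED25519) before sshd starts accepting connections. Outside the unit, roughly the same effect can be had with ssh-keygen -A, which creates any default host key types that do not yet exist:

    # Generate all missing default host keys under /etc/ssh (a no-op for keys
    # that already exist), then list what was produced.
    ssh-keygen -A
    ls -l /etc/ssh/ssh_host_*_key*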
Apr 30 03:27:53.018404 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:27:53.061624 containerd[1473]: time="2025-04-30T03:27:53.061524376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:53.065092 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:27:53.066749 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:27:53.075904 containerd[1473]: time="2025-04-30T03:27:53.074835089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:53.076221 containerd[1473]: time="2025-04-30T03:27:53.076092307Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:27:53.076221 containerd[1473]: time="2025-04-30T03:27:53.076179367Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:27:53.077955 containerd[1473]: time="2025-04-30T03:27:53.076623117Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:27:53.077955 containerd[1473]: time="2025-04-30T03:27:53.076664435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:53.077955 containerd[1473]: time="2025-04-30T03:27:53.076751639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:53.077955 containerd[1473]: time="2025-04-30T03:27:53.076768900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:53.077955 containerd[1473]: time="2025-04-30T03:27:53.077172873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:53.077955 containerd[1473]: time="2025-04-30T03:27:53.077207064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:53.077955 containerd[1473]: time="2025-04-30T03:27:53.077227611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:53.077955 containerd[1473]: time="2025-04-30T03:27:53.077239340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:53.077955 containerd[1473]: time="2025-04-30T03:27:53.077346227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:53.078598 containerd[1473]: time="2025-04-30T03:27:53.078542709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:53.079153 containerd[1473]: time="2025-04-30T03:27:53.079103421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:53.079303 containerd[1473]: time="2025-04-30T03:27:53.079281390Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:27:53.079635 containerd[1473]: time="2025-04-30T03:27:53.079608855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:27:53.079897 containerd[1473]: time="2025-04-30T03:27:53.079869428Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:27:53.083322 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:27:53.093147 containerd[1473]: time="2025-04-30T03:27:53.093062853Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:27:53.094299 containerd[1473]: time="2025-04-30T03:27:53.093461912Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:27:53.094299 containerd[1473]: time="2025-04-30T03:27:53.093670510Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:27:53.094299 containerd[1473]: time="2025-04-30T03:27:53.093725532Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:27:53.094299 containerd[1473]: time="2025-04-30T03:27:53.093751216Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:27:53.094299 containerd[1473]: time="2025-04-30T03:27:53.094030383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:27:53.095451 containerd[1473]: time="2025-04-30T03:27:53.095406598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096004503Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096043347Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096066153Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096089847Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096114129Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096133121Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096157169Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096178550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096206624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096225580Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096247034Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096300384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096324409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.096728 containerd[1473]: time="2025-04-30T03:27:53.096341578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096363020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096382080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096400578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096417213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096435966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096455191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096477452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096496133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096512307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096532618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096556631Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096588215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096635554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 30 03:27:53.097271 containerd[1473]: time="2025-04-30T03:27:53.096661479Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:27:53.098715 containerd[1473]: time="2025-04-30T03:27:53.097894798Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:27:53.098715 containerd[1473]: time="2025-04-30T03:27:53.098023718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:27:53.098715 containerd[1473]: time="2025-04-30T03:27:53.098044562Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:27:53.098715 containerd[1473]: time="2025-04-30T03:27:53.098062715Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:27:53.098715 containerd[1473]: time="2025-04-30T03:27:53.098082463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.098715 containerd[1473]: time="2025-04-30T03:27:53.098105196Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:27:53.098715 containerd[1473]: time="2025-04-30T03:27:53.098129720Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:27:53.098715 containerd[1473]: time="2025-04-30T03:27:53.098150251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:27:53.099118 containerd[1473]: time="2025-04-30T03:27:53.098554511Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:27:53.099118 containerd[1473]: time="2025-04-30T03:27:53.098656913Z" level=info msg="Connect containerd service" Apr 30 03:27:53.100474 containerd[1473]: time="2025-04-30T03:27:53.099600474Z" level=info msg="using legacy CRI server" Apr 30 03:27:53.100474 containerd[1473]: time="2025-04-30T03:27:53.099636568Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:27:53.100474 containerd[1473]: time="2025-04-30T03:27:53.099907606Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:27:53.101362 containerd[1473]: time="2025-04-30T03:27:53.101317161Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:27:53.102105 containerd[1473]: time="2025-04-30T03:27:53.101822789Z" level=info msg="Start subscribing containerd event" Apr 30 03:27:53.102297 containerd[1473]: time="2025-04-30T03:27:53.102271108Z" level=info msg="Start recovering state" Apr 30 03:27:53.102477 containerd[1473]: time="2025-04-30T03:27:53.102457275Z" level=info msg="Start event monitor" Apr 30 03:27:53.102565 containerd[1473]: time="2025-04-30T03:27:53.102552267Z" level=info msg="Start snapshots syncer" Apr 30 03:27:53.102633 containerd[1473]: time="2025-04-30T03:27:53.102619622Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:27:53.102650 systemd-networkd[1374]: eth0: Gained IPv6LL Apr 30 03:27:53.108846 containerd[1473]: time="2025-04-30T03:27:53.104003848Z" level=info msg="Start streaming server" Apr 30 03:27:53.108846 containerd[1473]: time="2025-04-30T03:27:53.103655545Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:27:53.108846 containerd[1473]: time="2025-04-30T03:27:53.104757558Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:27:53.108846 containerd[1473]: time="2025-04-30T03:27:53.104847262Z" level=info msg="containerd successfully booted in 0.146568s" Apr 30 03:27:53.104358 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Apr 30 03:27:53.105875 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:27:53.112016 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:27:53.117817 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:27:53.132088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
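containerd's CRI plugin reports above that no CNI network config was found in /etc/cni/net.d, so pod networking stays uninitialized until a network add-on installs one. Purely for illustration, a minimal bridge-plugin config that would satisfy the loader might look like the following; the name and subnet are made-up values, not anything from this system:

    # Hypothetical /etc/cni/net.d/10-bridge.conflist; a real cluster normally
    # gets its CNI config from the network add-on rather than by hand.
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        }
      ]
    }
    EOF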
Apr 30 03:27:53.144224 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:27:53.162281 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:27:53.174269 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:27:53.183118 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:27:53.186309 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:27:53.226117 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:27:53.294742 systemd-networkd[1374]: eth1: Gained IPv6LL Apr 30 03:27:53.297384 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Apr 30 03:27:53.417554 tar[1455]: linux-amd64/LICENSE Apr 30 03:27:53.418672 tar[1455]: linux-amd64/README.md Apr 30 03:27:53.449344 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:27:53.466417 systemd[1]: Started sshd@0-143.198.63.212:22-218.92.0.157:25334.service - OpenSSH per-connection server daemon (218.92.0.157:25334). Apr 30 03:27:53.474204 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:27:54.491090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:27:54.496262 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:27:54.501056 systemd[1]: Startup finished in 1.124s (kernel) + 5.615s (initrd) + 6.176s (userspace) = 12.916s. Apr 30 03:27:54.504994 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:27:54.540818 sshd[1561]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Apr 30 03:27:55.552015 kubelet[1567]: E0430 03:27:55.551929 1567 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:27:55.557471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:27:55.557726 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:27:55.558337 systemd[1]: kubelet.service: Consumed 1.641s CPU time. Apr 30 03:27:56.146276 systemd[1]: Started sshd@1-143.198.63.212:22-139.178.89.65:54430.service - OpenSSH per-connection server daemon (139.178.89.65:54430). Apr 30 03:27:56.195650 sshd[1580]: Accepted publickey for core from 139.178.89.65 port 54430 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:56.198748 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:56.212674 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:27:56.223246 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:27:56.228789 systemd-logind[1450]: New session 1 of user core. Apr 30 03:27:56.246608 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:27:56.267200 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:27:56.271970 (systemd)[1584]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:27:56.444832 systemd[1584]: Queued start job for default target default.target. 
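The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is written during init/join, so these failures are expected until the node is bootstrapped. For reference only, a stripped-down KubeletConfiguration of the kind that ends up there might look like this (illustrative values, not the file this node eventually receives):

    # Hypothetical minimal /var/lib/kubelet/config.yaml (KubeletConfiguration);
    # the real one is generated by the bootstrap tooling.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    EOF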
Apr 30 03:27:56.452471 systemd[1584]: Created slice app.slice - User Application Slice. Apr 30 03:27:56.452520 systemd[1584]: Reached target paths.target - Paths. Apr 30 03:27:56.452541 systemd[1584]: Reached target timers.target - Timers. Apr 30 03:27:56.454782 systemd[1584]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:27:56.470716 systemd[1584]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:27:56.470914 systemd[1584]: Reached target sockets.target - Sockets. Apr 30 03:27:56.470938 systemd[1584]: Reached target basic.target - Basic System. Apr 30 03:27:56.471008 systemd[1584]: Reached target default.target - Main User Target. Apr 30 03:27:56.471066 systemd[1584]: Startup finished in 188ms. Apr 30 03:27:56.471214 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:27:56.480051 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:27:56.556564 systemd[1]: Started sshd@2-143.198.63.212:22-139.178.89.65:54432.service - OpenSSH per-connection server daemon (139.178.89.65:54432). Apr 30 03:27:56.608735 sshd[1595]: Accepted publickey for core from 139.178.89.65 port 54432 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:56.611711 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:56.619929 systemd-logind[1450]: New session 2 of user core. Apr 30 03:27:56.624048 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:27:56.690205 sshd[1595]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:56.701948 systemd[1]: sshd@2-143.198.63.212:22-139.178.89.65:54432.service: Deactivated successfully. Apr 30 03:27:56.705067 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:27:56.707929 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:27:56.712238 systemd[1]: Started sshd@3-143.198.63.212:22-139.178.89.65:54438.service - OpenSSH per-connection server daemon (139.178.89.65:54438). Apr 30 03:27:56.715359 systemd-logind[1450]: Removed session 2. Apr 30 03:27:56.757510 sshd[1558]: PAM: Permission denied for root from 218.92.0.157 Apr 30 03:27:56.766775 sshd[1602]: Accepted publickey for core from 139.178.89.65 port 54438 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:56.768005 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:56.774270 systemd-logind[1450]: New session 3 of user core. Apr 30 03:27:56.786980 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:27:56.846895 sshd[1602]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:56.860324 systemd[1]: sshd@3-143.198.63.212:22-139.178.89.65:54438.service: Deactivated successfully. Apr 30 03:27:56.862433 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:27:56.865058 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:27:56.875809 systemd[1]: Started sshd@4-143.198.63.212:22-139.178.89.65:54442.service - OpenSSH per-connection server daemon (139.178.89.65:54442). Apr 30 03:27:56.877325 systemd-logind[1450]: Removed session 3. Apr 30 03:27:56.921887 sshd[1609]: Accepted publickey for core from 139.178.89.65 port 54442 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:56.924086 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:56.931879 systemd-logind[1450]: New session 4 of user core. 
Apr 30 03:27:56.942065 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:27:57.004707 sshd[1609]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:57.020194 systemd[1]: sshd@4-143.198.63.212:22-139.178.89.65:54442.service: Deactivated successfully. Apr 30 03:27:57.022058 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:27:57.024073 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:27:57.029202 systemd[1]: Started sshd@5-143.198.63.212:22-139.178.89.65:54450.service - OpenSSH per-connection server daemon (139.178.89.65:54450). Apr 30 03:27:57.031986 systemd-logind[1450]: Removed session 4. Apr 30 03:27:57.032961 sshd[1611]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Apr 30 03:27:57.074788 sshd[1617]: Accepted publickey for core from 139.178.89.65 port 54450 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:57.076944 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:57.084737 systemd-logind[1450]: New session 5 of user core. Apr 30 03:27:57.090037 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:27:57.167963 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:27:57.168404 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:27:57.183450 sudo[1620]: pam_unix(sudo:session): session closed for user root Apr 30 03:27:57.188377 sshd[1617]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:57.201591 systemd[1]: sshd@5-143.198.63.212:22-139.178.89.65:54450.service: Deactivated successfully. Apr 30 03:27:57.205207 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:27:57.207908 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:27:57.216226 systemd[1]: Started sshd@6-143.198.63.212:22-139.178.89.65:54458.service - OpenSSH per-connection server daemon (139.178.89.65:54458). Apr 30 03:27:57.218514 systemd-logind[1450]: Removed session 5. Apr 30 03:27:57.272491 sshd[1625]: Accepted publickey for core from 139.178.89.65 port 54458 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:57.274899 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:57.281017 systemd-logind[1450]: New session 6 of user core. Apr 30 03:27:57.286998 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:27:57.350263 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:27:57.351181 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:27:57.356223 sudo[1629]: pam_unix(sudo:session): session closed for user root Apr 30 03:27:57.364510 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:27:57.364944 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:27:57.385129 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:27:57.387906 auditctl[1632]: No rules Apr 30 03:27:57.388734 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:27:57.389046 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. 
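The sudo commands above remove the SELinux/default rule files from /etc/audit/rules.d and restart audit-rules.service, after which auditctl reports "No rules". augenrules assembles whatever remains under /etc/audit/rules.d into the live ruleset, so the state can be inspected or rebuilt like this:

    # Show the currently loaded audit rules (none here, matching the log),
    # then recompile and load whatever .rules files are left in rules.d.
    auditctl -l
    augenrules --load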
Apr 30 03:27:57.392477 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:27:57.430652 augenrules[1650]: No rules Apr 30 03:27:57.432337 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:27:57.435406 sudo[1628]: pam_unix(sudo:session): session closed for user root Apr 30 03:27:57.439640 sshd[1625]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:57.452177 systemd[1]: sshd@6-143.198.63.212:22-139.178.89.65:54458.service: Deactivated successfully. Apr 30 03:27:57.454579 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:27:57.459030 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:27:57.464232 systemd[1]: Started sshd@7-143.198.63.212:22-139.178.89.65:54466.service - OpenSSH per-connection server daemon (139.178.89.65:54466). Apr 30 03:27:57.465638 systemd-logind[1450]: Removed session 6. Apr 30 03:27:57.508951 sshd[1658]: Accepted publickey for core from 139.178.89.65 port 54466 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:27:57.510788 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:57.516070 systemd-logind[1450]: New session 7 of user core. Apr 30 03:27:57.523967 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:27:57.587028 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:27:57.587463 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:27:58.036043 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:27:58.052765 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:27:58.549762 dockerd[1677]: time="2025-04-30T03:27:58.548990708Z" level=info msg="Starting up" Apr 30 03:27:58.685191 dockerd[1677]: time="2025-04-30T03:27:58.685144298Z" level=info msg="Loading containers: start." Apr 30 03:27:58.811731 kernel: Initializing XFRM netlink socket Apr 30 03:27:58.844738 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Apr 30 03:27:58.847022 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Apr 30 03:27:58.857120 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Apr 30 03:27:58.857858 sshd[1558]: PAM: Permission denied for root from 218.92.0.157 Apr 30 03:27:58.904398 systemd-networkd[1374]: docker0: Link UP Apr 30 03:27:58.905136 systemd-timesyncd[1344]: Network configuration changed, trying to establish connection. Apr 30 03:27:58.946578 dockerd[1677]: time="2025-04-30T03:27:58.946516463Z" level=info msg="Loading containers: done." 
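dockerd has created the docker0 bridge and finished loading containers. Once the unit reports started, a quick sanity check from a shell might be:

    # Confirm the daemon answers on its socket and that the bridge exists.
    docker info --format '{{.ServerVersion}} {{.Driver}}'
    ip link show docker0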
Apr 30 03:27:58.964963 dockerd[1677]: time="2025-04-30T03:27:58.964879148Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:27:58.965165 dockerd[1677]: time="2025-04-30T03:27:58.965028297Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:27:58.965165 dockerd[1677]: time="2025-04-30T03:27:58.965149406Z" level=info msg="Daemon has completed initialization" Apr 30 03:27:58.997291 dockerd[1677]: time="2025-04-30T03:27:58.996289936Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:27:58.996949 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:27:59.135804 sshd[1803]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Apr 30 03:28:00.027791 containerd[1473]: time="2025-04-30T03:28:00.027368383Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:28:00.604380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820537136.mount: Deactivated successfully. Apr 30 03:28:01.567729 sshd[1558]: PAM: Permission denied for root from 218.92.0.157 Apr 30 03:28:01.704655 sshd[1558]: Received disconnect from 218.92.0.157 port 25334:11: [preauth] Apr 30 03:28:01.704655 sshd[1558]: Disconnected from authenticating user root 218.92.0.157 port 25334 [preauth] Apr 30 03:28:01.707023 systemd[1]: sshd@0-143.198.63.212:22-218.92.0.157:25334.service: Deactivated successfully. Apr 30 03:28:02.151315 containerd[1473]: time="2025-04-30T03:28:02.151242973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:02.152952 containerd[1473]: time="2025-04-30T03:28:02.152579422Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" Apr 30 03:28:02.154256 containerd[1473]: time="2025-04-30T03:28:02.153510025Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:02.157434 containerd[1473]: time="2025-04-30T03:28:02.157375941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:02.158961 containerd[1473]: time="2025-04-30T03:28:02.158899176Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.131453224s" Apr 30 03:28:02.158961 containerd[1473]: time="2025-04-30T03:28:02.158961649Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:28:02.201910 containerd[1473]: time="2025-04-30T03:28:02.201862811Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:28:03.955518 containerd[1473]: 
time="2025-04-30T03:28:03.954125304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:03.956877 containerd[1473]: time="2025-04-30T03:28:03.956810181Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" Apr 30 03:28:03.957634 containerd[1473]: time="2025-04-30T03:28:03.957600191Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:03.961519 containerd[1473]: time="2025-04-30T03:28:03.961462837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:03.962991 containerd[1473]: time="2025-04-30T03:28:03.962938988Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.76078667s" Apr 30 03:28:03.963179 containerd[1473]: time="2025-04-30T03:28:03.963157289Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 03:28:04.000574 containerd[1473]: time="2025-04-30T03:28:04.000522898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 03:28:05.242742 containerd[1473]: time="2025-04-30T03:28:05.242638553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:05.245219 containerd[1473]: time="2025-04-30T03:28:05.245109021Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" Apr 30 03:28:05.246657 containerd[1473]: time="2025-04-30T03:28:05.246560887Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:05.251888 containerd[1473]: time="2025-04-30T03:28:05.251795234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:05.254965 containerd[1473]: time="2025-04-30T03:28:05.254860870Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.254290972s" Apr 30 03:28:05.254965 containerd[1473]: time="2025-04-30T03:28:05.254924398Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 03:28:05.290321 containerd[1473]: 
time="2025-04-30T03:28:05.290258005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 03:28:05.661114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:28:05.670046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:05.827948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:05.842424 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:05.933199 kubelet[1913]: E0430 03:28:05.933037 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:05.939612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:05.941046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:28:06.509951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914871209.mount: Deactivated successfully. Apr 30 03:28:07.134800 containerd[1473]: time="2025-04-30T03:28:07.133504123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:07.134800 containerd[1473]: time="2025-04-30T03:28:07.134522292Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" Apr 30 03:28:07.134800 containerd[1473]: time="2025-04-30T03:28:07.134724107Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:07.137613 containerd[1473]: time="2025-04-30T03:28:07.137547390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:07.138898 containerd[1473]: time="2025-04-30T03:28:07.138846871Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.848536705s" Apr 30 03:28:07.139089 containerd[1473]: time="2025-04-30T03:28:07.139065600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:28:07.173910 containerd[1473]: time="2025-04-30T03:28:07.173865070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:28:07.175965 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Apr 30 03:28:07.669278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196455052.mount: Deactivated successfully. 
Apr 30 03:28:08.655202 containerd[1473]: time="2025-04-30T03:28:08.655133443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:08.657154 containerd[1473]: time="2025-04-30T03:28:08.657079389Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 03:28:08.658012 containerd[1473]: time="2025-04-30T03:28:08.657973716Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:08.662202 containerd[1473]: time="2025-04-30T03:28:08.660850131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:08.662202 containerd[1473]: time="2025-04-30T03:28:08.661932854Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.487722004s" Apr 30 03:28:08.662202 containerd[1473]: time="2025-04-30T03:28:08.661966610Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:28:08.690250 containerd[1473]: time="2025-04-30T03:28:08.690207242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:28:09.143207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2423585006.mount: Deactivated successfully. 
Apr 30 03:28:09.148872 containerd[1473]: time="2025-04-30T03:28:09.148775842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:09.150665 containerd[1473]: time="2025-04-30T03:28:09.150571555Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Apr 30 03:28:09.151980 containerd[1473]: time="2025-04-30T03:28:09.151896450Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:09.155172 containerd[1473]: time="2025-04-30T03:28:09.155067323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:09.156329 containerd[1473]: time="2025-04-30T03:28:09.156277161Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 465.826676ms" Apr 30 03:28:09.156329 containerd[1473]: time="2025-04-30T03:28:09.156324101Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:28:09.196818 containerd[1473]: time="2025-04-30T03:28:09.196742612Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:28:09.673942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177623142.mount: Deactivated successfully. Apr 30 03:28:10.253985 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Apr 30 03:28:11.445745 containerd[1473]: time="2025-04-30T03:28:11.445586233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:11.448499 containerd[1473]: time="2025-04-30T03:28:11.447431509Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Apr 30 03:28:11.448499 containerd[1473]: time="2025-04-30T03:28:11.447815635Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:11.452463 containerd[1473]: time="2025-04-30T03:28:11.452394322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:11.454490 containerd[1473]: time="2025-04-30T03:28:11.454425376Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.257617134s" Apr 30 03:28:11.454853 containerd[1473]: time="2025-04-30T03:28:11.454681907Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:28:15.072173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:15.084160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:15.124081 systemd[1]: Reloading requested from client PID 2099 ('systemctl') (unit session-7.scope)... Apr 30 03:28:15.124101 systemd[1]: Reloading... Apr 30 03:28:15.280726 zram_generator::config[2139]: No configuration found. Apr 30 03:28:15.441021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:15.532374 systemd[1]: Reloading finished in 407 ms. Apr 30 03:28:15.602493 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:28:15.602607 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:28:15.603106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:15.607247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:15.778859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:15.797741 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:28:15.880773 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:15.881332 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
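During the reload, systemd warns that docker.socket still references the legacy /var/run/docker.sock path and rewrites it to /run/docker.sock on the fly. One way to silence that warning permanently is a drop-in that resets the listener, sketched below with the paths taken from the warning itself:

    # Hypothetical drop-in overriding the shipped docker.socket listener.
    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # Clear the inherited ListenStream list, then re-add the non-legacy path.
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload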
Apr 30 03:28:15.881332 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:15.882780 kubelet[2192]: I0430 03:28:15.882643 2192 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:28:16.432785 kubelet[2192]: I0430 03:28:16.432190 2192 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:28:16.432785 kubelet[2192]: I0430 03:28:16.432230 2192 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:28:16.432785 kubelet[2192]: I0430 03:28:16.432553 2192 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:28:16.462928 kubelet[2192]: I0430 03:28:16.462234 2192 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:28:16.466475 kubelet[2192]: E0430 03:28:16.466345 2192 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.63.212:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:16.485730 kubelet[2192]: I0430 03:28:16.484547 2192 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:28:16.485730 kubelet[2192]: I0430 03:28:16.485063 2192 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:28:16.485730 kubelet[2192]: I0430 03:28:16.485125 2192 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-0-7c044d2e24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:28:16.485730 kubelet[2192]: I0430 03:28:16.485538 2192 topology_manager.go:138] 
"Creating topology manager with none policy" Apr 30 03:28:16.486118 kubelet[2192]: I0430 03:28:16.485556 2192 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:28:16.487955 kubelet[2192]: I0430 03:28:16.487907 2192 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:16.489232 kubelet[2192]: I0430 03:28:16.489189 2192 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:28:16.489232 kubelet[2192]: I0430 03:28:16.489232 2192 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:28:16.489378 kubelet[2192]: I0430 03:28:16.489282 2192 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:28:16.489378 kubelet[2192]: I0430 03:28:16.489309 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:28:16.495765 kubelet[2192]: W0430 03:28:16.494737 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.63.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-0-7c044d2e24&limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:16.495765 kubelet[2192]: E0430 03:28:16.494844 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.63.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-0-7c044d2e24&limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:16.495765 kubelet[2192]: I0430 03:28:16.494990 2192 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:28:16.497716 kubelet[2192]: I0430 03:28:16.497406 2192 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:28:16.497716 kubelet[2192]: W0430 03:28:16.497543 2192 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 03:28:16.499184 kubelet[2192]: I0430 03:28:16.498652 2192 server.go:1264] "Started kubelet" Apr 30 03:28:16.507747 kubelet[2192]: W0430 03:28:16.505741 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.63.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:16.507747 kubelet[2192]: E0430 03:28:16.505821 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.63.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:16.507747 kubelet[2192]: I0430 03:28:16.505860 2192 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:28:16.507747 kubelet[2192]: I0430 03:28:16.506857 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:28:16.507747 kubelet[2192]: I0430 03:28:16.507419 2192 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:28:16.507747 kubelet[2192]: I0430 03:28:16.507434 2192 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:28:16.508403 kubelet[2192]: E0430 03:28:16.508215 2192 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.63.212:6443/api/v1/namespaces/default/events\": dial tcp 143.198.63.212:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-0-7c044d2e24.183afaebdd6c0a7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-0-7c044d2e24,UID:ci-4081.3.3-0-7c044d2e24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-0-7c044d2e24,},FirstTimestamp:2025-04-30 03:28:16.498592378 +0000 UTC m=+0.695634501,LastTimestamp:2025-04-30 03:28:16.498592378 +0000 UTC m=+0.695634501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-0-7c044d2e24,}" Apr 30 03:28:16.515711 kubelet[2192]: I0430 03:28:16.514742 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:28:16.521928 kubelet[2192]: I0430 03:28:16.521876 2192 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:28:16.522729 kubelet[2192]: E0430 03:28:16.522659 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.63.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-0-7c044d2e24?timeout=10s\": dial tcp 143.198.63.212:6443: connect: connection refused" interval="200ms" Apr 30 03:28:16.523388 kubelet[2192]: I0430 03:28:16.523347 2192 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:28:16.524176 kubelet[2192]: W0430 03:28:16.524110 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.63.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:16.524362 kubelet[2192]: E0430 03:28:16.524342 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://143.198.63.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:16.524817 kubelet[2192]: I0430 03:28:16.524793 2192 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:28:16.526777 kubelet[2192]: I0430 03:28:16.526733 2192 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:28:16.527486 kubelet[2192]: E0430 03:28:16.527444 2192 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:28:16.529932 kubelet[2192]: I0430 03:28:16.529884 2192 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:28:16.529932 kubelet[2192]: I0430 03:28:16.529917 2192 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:28:16.561031 kubelet[2192]: I0430 03:28:16.560960 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:28:16.563933 kubelet[2192]: I0430 03:28:16.563893 2192 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:28:16.563933 kubelet[2192]: I0430 03:28:16.563921 2192 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:28:16.563933 kubelet[2192]: I0430 03:28:16.563949 2192 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:16.564226 kubelet[2192]: I0430 03:28:16.563910 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:28:16.564369 kubelet[2192]: I0430 03:28:16.564351 2192 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:28:16.564561 kubelet[2192]: I0430 03:28:16.564544 2192 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:28:16.565318 kubelet[2192]: E0430 03:28:16.564871 2192 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:28:16.567342 kubelet[2192]: I0430 03:28:16.567287 2192 policy_none.go:49] "None policy: Start" Apr 30 03:28:16.574083 kubelet[2192]: W0430 03:28:16.574000 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.63.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:16.574083 kubelet[2192]: E0430 03:28:16.574074 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.63.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:16.575384 kubelet[2192]: I0430 03:28:16.574917 2192 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:28:16.575384 kubelet[2192]: I0430 03:28:16.574955 2192 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:28:16.585010 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:28:16.601021 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:28:16.608347 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 03:28:16.618908 kubelet[2192]: I0430 03:28:16.618553 2192 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:28:16.619111 kubelet[2192]: I0430 03:28:16.618890 2192 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:28:16.619111 kubelet[2192]: I0430 03:28:16.619067 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:28:16.627801 kubelet[2192]: I0430 03:28:16.627108 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.629351 kubelet[2192]: E0430 03:28:16.629215 2192 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-0-7c044d2e24\" not found" Apr 30 03:28:16.630201 kubelet[2192]: E0430 03:28:16.629965 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.63.212:6443/api/v1/nodes\": dial tcp 143.198.63.212:6443: connect: connection refused" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.665823 kubelet[2192]: I0430 03:28:16.665634 2192 topology_manager.go:215] "Topology Admit Handler" podUID="ba3ea4f3b3293435954e72248604cb22" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.667216 kubelet[2192]: I0430 03:28:16.667171 2192 topology_manager.go:215] "Topology Admit Handler" podUID="65afaab99c8ad24561b70b530b45c3e9" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.669731 kubelet[2192]: I0430 03:28:16.669012 2192 topology_manager.go:215] "Topology Admit Handler" podUID="87642321db6a299bda362c43f95325ac" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.680467 systemd[1]: Created slice kubepods-burstable-podba3ea4f3b3293435954e72248604cb22.slice - libcontainer container kubepods-burstable-podba3ea4f3b3293435954e72248604cb22.slice. Apr 30 03:28:16.700273 systemd[1]: Created slice kubepods-burstable-pod65afaab99c8ad24561b70b530b45c3e9.slice - libcontainer container kubepods-burstable-pod65afaab99c8ad24561b70b530b45c3e9.slice. Apr 30 03:28:16.715782 systemd[1]: Created slice kubepods-burstable-pod87642321db6a299bda362c43f95325ac.slice - libcontainer container kubepods-burstable-pod87642321db6a299bda362c43f95325ac.slice. 
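The kubepods-burstable-pod<UID>.slice units created above show how the kubelet, running with the systemd cgroup driver from the logged config, names pod cgroups: "kubepods", the QoS class, and "pod" plus the pod UID are joined with dashes, and dashes inside the UID are escaped to underscores because "-" is systemd's slice hierarchy separator (compare the kube-proxy slice near the end of this log). A sketch that reproduces the observed names; it is not the kubelet's actual helper.

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the slice names seen in the log:
// kubepods-<qos>-pod<uid>.slice, with "-" inside the UID escaped to "_".
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UIDs taken from the log above and from the kube-proxy pod later on.
	fmt.Println(podSliceName("burstable", "ba3ea4f3b3293435954e72248604cb22"))
	fmt.Println(podSliceName("besteffort", "189d277b-2410-41fb-aaff-d8058cde05bb"))
}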
Apr 30 03:28:16.724629 kubelet[2192]: E0430 03:28:16.724531 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.63.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-0-7c044d2e24?timeout=10s\": dial tcp 143.198.63.212:6443: connect: connection refused" interval="400ms" Apr 30 03:28:16.727152 kubelet[2192]: I0430 03:28:16.726991 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.727152 kubelet[2192]: I0430 03:28:16.727060 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.727152 kubelet[2192]: I0430 03:28:16.727108 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.727152 kubelet[2192]: I0430 03:28:16.727142 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87642321db6a299bda362c43f95325ac-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-0-7c044d2e24\" (UID: \"87642321db6a299bda362c43f95325ac\") " pod="kube-system/kube-scheduler-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.727152 kubelet[2192]: I0430 03:28:16.727172 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba3ea4f3b3293435954e72248604cb22-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-0-7c044d2e24\" (UID: \"ba3ea4f3b3293435954e72248604cb22\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.728127 kubelet[2192]: I0430 03:28:16.727202 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba3ea4f3b3293435954e72248604cb22-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-0-7c044d2e24\" (UID: \"ba3ea4f3b3293435954e72248604cb22\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.728127 kubelet[2192]: I0430 03:28:16.727258 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.728127 kubelet[2192]: I0430 03:28:16.727305 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ba3ea4f3b3293435954e72248604cb22-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-0-7c044d2e24\" (UID: \"ba3ea4f3b3293435954e72248604cb22\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.728127 kubelet[2192]: I0430 03:28:16.727338 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.832458 kubelet[2192]: I0430 03:28:16.831854 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.832458 kubelet[2192]: E0430 03:28:16.832296 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.63.212:6443/api/v1/nodes\": dial tcp 143.198.63.212:6443: connect: connection refused" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:16.992970 kubelet[2192]: E0430 03:28:16.992815 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:16.994722 containerd[1473]: time="2025-04-30T03:28:16.994665093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-0-7c044d2e24,Uid:ba3ea4f3b3293435954e72248604cb22,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:16.996404 systemd-resolved[1329]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Apr 30 03:28:17.013633 kubelet[2192]: E0430 03:28:17.012234 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:17.020553 kubelet[2192]: E0430 03:28:17.020021 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:17.024412 containerd[1473]: time="2025-04-30T03:28:17.023999278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-0-7c044d2e24,Uid:65afaab99c8ad24561b70b530b45c3e9,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:17.025066 containerd[1473]: time="2025-04-30T03:28:17.024776421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-0-7c044d2e24,Uid:87642321db6a299bda362c43f95325ac,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:17.126321 kubelet[2192]: E0430 03:28:17.126246 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.63.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-0-7c044d2e24?timeout=10s\": dial tcp 143.198.63.212:6443: connect: connection refused" interval="800ms" Apr 30 03:28:17.234709 kubelet[2192]: I0430 03:28:17.234639 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:17.235151 kubelet[2192]: E0430 03:28:17.235111 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.63.212:6443/api/v1/nodes\": dial tcp 143.198.63.212:6443: connect: connection refused" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:17.386859 kubelet[2192]: W0430 03:28:17.386568 
2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.63.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:17.386859 kubelet[2192]: E0430 03:28:17.386717 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.63.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:17.460445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4156204581.mount: Deactivated successfully. Apr 30 03:28:17.469072 containerd[1473]: time="2025-04-30T03:28:17.467658969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:17.470917 containerd[1473]: time="2025-04-30T03:28:17.470838087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:28:17.472467 containerd[1473]: time="2025-04-30T03:28:17.472320684Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:17.474949 containerd[1473]: time="2025-04-30T03:28:17.474018531Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:17.474949 containerd[1473]: time="2025-04-30T03:28:17.474788275Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 03:28:17.475123 containerd[1473]: time="2025-04-30T03:28:17.475087471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:28:17.475721 containerd[1473]: time="2025-04-30T03:28:17.475663617Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:17.480423 containerd[1473]: time="2025-04-30T03:28:17.480348991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:17.481550 kubelet[2192]: W0430 03:28:17.481426 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.63.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:17.482400 containerd[1473]: time="2025-04-30T03:28:17.482247638Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.444608ms" Apr 30 03:28:17.482678 kubelet[2192]: E0430 03:28:17.482643 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to 
list *v1.RuntimeClass: Get "https://143.198.63.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:17.485353 containerd[1473]: time="2025-04-30T03:28:17.484178743Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 460.056903ms" Apr 30 03:28:17.489375 containerd[1473]: time="2025-04-30T03:28:17.489093778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.225242ms" Apr 30 03:28:17.522987 kubelet[2192]: W0430 03:28:17.522821 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.63.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:17.522987 kubelet[2192]: E0430 03:28:17.522954 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.63.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:17.701833 containerd[1473]: time="2025-04-30T03:28:17.701404367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:17.701833 containerd[1473]: time="2025-04-30T03:28:17.701519446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:17.701833 containerd[1473]: time="2025-04-30T03:28:17.701583213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:17.701833 containerd[1473]: time="2025-04-30T03:28:17.701746640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:17.711300 containerd[1473]: time="2025-04-30T03:28:17.711029288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:17.711300 containerd[1473]: time="2025-04-30T03:28:17.711115563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:17.711300 containerd[1473]: time="2025-04-30T03:28:17.711133343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:17.715193 containerd[1473]: time="2025-04-30T03:28:17.714947899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:17.715193 containerd[1473]: time="2025-04-30T03:28:17.715066892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:17.715978 containerd[1473]: time="2025-04-30T03:28:17.715805471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:17.715978 containerd[1473]: time="2025-04-30T03:28:17.715926515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:17.717497 containerd[1473]: time="2025-04-30T03:28:17.717344261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:17.748451 systemd[1]: Started cri-containerd-cbd063e571b981b8f794c5ed7e3d0ebba5f3138ebc3a23db64bc646881b57c0b.scope - libcontainer container cbd063e571b981b8f794c5ed7e3d0ebba5f3138ebc3a23db64bc646881b57c0b. Apr 30 03:28:17.770894 systemd[1]: Started cri-containerd-7092265bcbd56685d574ea2d5b974cfa86458d19b078dffe4ef2388468feabf1.scope - libcontainer container 7092265bcbd56685d574ea2d5b974cfa86458d19b078dffe4ef2388468feabf1. Apr 30 03:28:17.783073 systemd[1]: Started cri-containerd-5cd2adb0c7126c16b896e4bd2524fd0872f3fdb0636870d71108ddaf6cd2bb5e.scope - libcontainer container 5cd2adb0c7126c16b896e4bd2524fd0872f3fdb0636870d71108ddaf6cd2bb5e. Apr 30 03:28:17.874445 containerd[1473]: time="2025-04-30T03:28:17.874092469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-0-7c044d2e24,Uid:ba3ea4f3b3293435954e72248604cb22,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbd063e571b981b8f794c5ed7e3d0ebba5f3138ebc3a23db64bc646881b57c0b\"" Apr 30 03:28:17.884093 kubelet[2192]: E0430 03:28:17.884034 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:17.900185 containerd[1473]: time="2025-04-30T03:28:17.899474719Z" level=info msg="CreateContainer within sandbox \"cbd063e571b981b8f794c5ed7e3d0ebba5f3138ebc3a23db64bc646881b57c0b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:28:17.927319 kubelet[2192]: E0430 03:28:17.927132 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.63.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-0-7c044d2e24?timeout=10s\": dial tcp 143.198.63.212:6443: connect: connection refused" interval="1.6s" Apr 30 03:28:17.936394 containerd[1473]: time="2025-04-30T03:28:17.936324021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-0-7c044d2e24,Uid:65afaab99c8ad24561b70b530b45c3e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7092265bcbd56685d574ea2d5b974cfa86458d19b078dffe4ef2388468feabf1\"" Apr 30 03:28:17.938408 kubelet[2192]: E0430 03:28:17.937919 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:17.943919 containerd[1473]: time="2025-04-30T03:28:17.943019731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-0-7c044d2e24,Uid:87642321db6a299bda362c43f95325ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cd2adb0c7126c16b896e4bd2524fd0872f3fdb0636870d71108ddaf6cd2bb5e\"" Apr 30 03:28:17.945055 containerd[1473]: time="2025-04-30T03:28:17.944986096Z" level=info 
msg="CreateContainer within sandbox \"7092265bcbd56685d574ea2d5b974cfa86458d19b078dffe4ef2388468feabf1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:28:17.945780 kubelet[2192]: E0430 03:28:17.945381 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:17.949489 containerd[1473]: time="2025-04-30T03:28:17.949433759Z" level=info msg="CreateContainer within sandbox \"5cd2adb0c7126c16b896e4bd2524fd0872f3fdb0636870d71108ddaf6cd2bb5e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:28:17.953217 containerd[1473]: time="2025-04-30T03:28:17.951758940Z" level=info msg="CreateContainer within sandbox \"cbd063e571b981b8f794c5ed7e3d0ebba5f3138ebc3a23db64bc646881b57c0b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"110d608b8c7f82c976f9a8e8050ad0ddec9cad1c5308ba5bab205fbc6920efd3\"" Apr 30 03:28:17.953217 containerd[1473]: time="2025-04-30T03:28:17.952851890Z" level=info msg="StartContainer for \"110d608b8c7f82c976f9a8e8050ad0ddec9cad1c5308ba5bab205fbc6920efd3\"" Apr 30 03:28:17.970376 containerd[1473]: time="2025-04-30T03:28:17.970164163Z" level=info msg="CreateContainer within sandbox \"7092265bcbd56685d574ea2d5b974cfa86458d19b078dffe4ef2388468feabf1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc9f737dcc1b4c793b48936e45fc05ebd51c87b2836550be55ae5010270eaa9f\"" Apr 30 03:28:17.971296 containerd[1473]: time="2025-04-30T03:28:17.971235176Z" level=info msg="StartContainer for \"dc9f737dcc1b4c793b48936e45fc05ebd51c87b2836550be55ae5010270eaa9f\"" Apr 30 03:28:17.974831 containerd[1473]: time="2025-04-30T03:28:17.974771724Z" level=info msg="CreateContainer within sandbox \"5cd2adb0c7126c16b896e4bd2524fd0872f3fdb0636870d71108ddaf6cd2bb5e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d0ef5a8a5077e609adf0e77d2efc098a3a860c591b8136a6b3e65f717c4d910\"" Apr 30 03:28:17.975787 containerd[1473]: time="2025-04-30T03:28:17.975537749Z" level=info msg="StartContainer for \"6d0ef5a8a5077e609adf0e77d2efc098a3a860c591b8136a6b3e65f717c4d910\"" Apr 30 03:28:18.024823 systemd[1]: Started cri-containerd-110d608b8c7f82c976f9a8e8050ad0ddec9cad1c5308ba5bab205fbc6920efd3.scope - libcontainer container 110d608b8c7f82c976f9a8e8050ad0ddec9cad1c5308ba5bab205fbc6920efd3. Apr 30 03:28:18.037319 kubelet[2192]: I0430 03:28:18.037011 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:18.038128 kubelet[2192]: E0430 03:28:18.037421 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.63.212:6443/api/v1/nodes\": dial tcp 143.198.63.212:6443: connect: connection refused" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:18.049031 systemd[1]: Started cri-containerd-dc9f737dcc1b4c793b48936e45fc05ebd51c87b2836550be55ae5010270eaa9f.scope - libcontainer container dc9f737dcc1b4c793b48936e45fc05ebd51c87b2836550be55ae5010270eaa9f. Apr 30 03:28:18.074097 systemd[1]: Started cri-containerd-6d0ef5a8a5077e609adf0e77d2efc098a3a860c591b8136a6b3e65f717c4d910.scope - libcontainer container 6d0ef5a8a5077e609adf0e77d2efc098a3a860c591b8136a6b3e65f717c4d910. 
Apr 30 03:28:18.092190 kubelet[2192]: W0430 03:28:18.092038 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.63.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-0-7c044d2e24&limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:18.092190 kubelet[2192]: E0430 03:28:18.092148 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.63.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-0-7c044d2e24&limit=500&resourceVersion=0": dial tcp 143.198.63.212:6443: connect: connection refused Apr 30 03:28:18.136731 containerd[1473]: time="2025-04-30T03:28:18.134239701Z" level=info msg="StartContainer for \"110d608b8c7f82c976f9a8e8050ad0ddec9cad1c5308ba5bab205fbc6920efd3\" returns successfully" Apr 30 03:28:18.172391 containerd[1473]: time="2025-04-30T03:28:18.172242918Z" level=info msg="StartContainer for \"dc9f737dcc1b4c793b48936e45fc05ebd51c87b2836550be55ae5010270eaa9f\" returns successfully" Apr 30 03:28:18.198241 containerd[1473]: time="2025-04-30T03:28:18.198107143Z" level=info msg="StartContainer for \"6d0ef5a8a5077e609adf0e77d2efc098a3a860c591b8136a6b3e65f717c4d910\" returns successfully" Apr 30 03:28:18.586716 kubelet[2192]: E0430 03:28:18.586666 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:18.589849 kubelet[2192]: E0430 03:28:18.589803 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:18.595286 kubelet[2192]: E0430 03:28:18.595241 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:19.596006 kubelet[2192]: E0430 03:28:19.595964 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:19.640212 kubelet[2192]: I0430 03:28:19.639242 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:20.638340 kubelet[2192]: E0430 03:28:20.638265 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-0-7c044d2e24\" not found" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:20.669955 kubelet[2192]: E0430 03:28:20.669567 2192 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.3-0-7c044d2e24.183afaebdd6c0a7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-0-7c044d2e24,UID:ci-4081.3.3-0-7c044d2e24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-0-7c044d2e24,},FirstTimestamp:2025-04-30 03:28:16.498592378 +0000 UTC m=+0.695634501,LastTimestamp:2025-04-30 03:28:16.498592378 +0000 UTC m=+0.695634501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-0-7c044d2e24,}" Apr 30 03:28:20.735723 kubelet[2192]: I0430 03:28:20.733495 2192 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:20.737974 kubelet[2192]: E0430 03:28:20.737795 2192 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.3-0-7c044d2e24.183afaebdf23f7a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-0-7c044d2e24,UID:ci-4081.3.3-0-7c044d2e24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-0-7c044d2e24,},FirstTimestamp:2025-04-30 03:28:16.527423396 +0000 UTC m=+0.724465524,LastTimestamp:2025-04-30 03:28:16.527423396 +0000 UTC m=+0.724465524,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-0-7c044d2e24,}" Apr 30 03:28:21.514271 kubelet[2192]: I0430 03:28:21.513942 2192 apiserver.go:52] "Watching apiserver" Apr 30 03:28:21.524467 kubelet[2192]: I0430 03:28:21.524419 2192 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:28:22.741291 systemd[1]: Reloading requested from client PID 2466 ('systemctl') (unit session-7.scope)... Apr 30 03:28:22.741312 systemd[1]: Reloading... Apr 30 03:28:22.870905 zram_generator::config[2511]: No configuration found. Apr 30 03:28:23.044970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:23.199528 systemd[1]: Reloading finished in 457 ms. Apr 30 03:28:23.253940 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:23.265421 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:28:23.265829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:23.265968 systemd[1]: kubelet.service: Consumed 1.215s CPU time, 109.8M memory peak, 0B memory swap peak. Apr 30 03:28:23.275430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:23.442839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:23.455984 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:28:23.547783 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:23.547783 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:28:23.547783 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:28:23.549710 kubelet[2556]: I0430 03:28:23.549194 2556 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:28:23.556999 kubelet[2556]: I0430 03:28:23.556944 2556 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:28:23.556999 kubelet[2556]: I0430 03:28:23.556988 2556 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:28:23.560737 kubelet[2556]: I0430 03:28:23.560671 2556 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:28:23.564374 kubelet[2556]: I0430 03:28:23.564326 2556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:28:23.566668 kubelet[2556]: I0430 03:28:23.566276 2556 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:28:23.577237 kubelet[2556]: I0430 03:28:23.577195 2556 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:28:23.577576 kubelet[2556]: I0430 03:28:23.577526 2556 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:28:23.577922 kubelet[2556]: I0430 03:28:23.577581 2556 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-0-7c044d2e24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:28:23.578045 kubelet[2556]: I0430 03:28:23.577945 2556 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:28:23.578045 kubelet[2556]: I0430 03:28:23.577967 2556 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:28:23.578045 kubelet[2556]: I0430 03:28:23.578037 2556 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:23.580944 kubelet[2556]: I0430 03:28:23.579708 2556 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:28:23.580944 kubelet[2556]: I0430 03:28:23.579751 2556 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Apr 30 03:28:23.580944 kubelet[2556]: I0430 03:28:23.579798 2556 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:28:23.580944 kubelet[2556]: I0430 03:28:23.579829 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:28:23.582026 kubelet[2556]: I0430 03:28:23.581948 2556 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:28:23.584293 kubelet[2556]: I0430 03:28:23.584251 2556 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:28:23.585175 kubelet[2556]: I0430 03:28:23.585146 2556 server.go:1264] "Started kubelet" Apr 30 03:28:23.593780 kubelet[2556]: I0430 03:28:23.593405 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:28:23.598351 kubelet[2556]: I0430 03:28:23.598144 2556 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:28:23.603260 kubelet[2556]: I0430 03:28:23.602077 2556 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:28:23.606748 kubelet[2556]: I0430 03:28:23.604907 2556 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:28:23.608906 kubelet[2556]: I0430 03:28:23.608809 2556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:28:23.610726 kubelet[2556]: I0430 03:28:23.609245 2556 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:28:23.622699 kubelet[2556]: I0430 03:28:23.621332 2556 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:28:23.623727 kubelet[2556]: I0430 03:28:23.623194 2556 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:28:23.627844 kubelet[2556]: I0430 03:28:23.627809 2556 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:28:23.628184 kubelet[2556]: I0430 03:28:23.628153 2556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:28:23.629470 kubelet[2556]: E0430 03:28:23.629435 2556 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:28:23.631098 kubelet[2556]: I0430 03:28:23.631068 2556 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:28:23.644509 kubelet[2556]: I0430 03:28:23.644101 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:28:23.646099 kubelet[2556]: I0430 03:28:23.646001 2556 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:28:23.646099 kubelet[2556]: I0430 03:28:23.646058 2556 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:28:23.646099 kubelet[2556]: I0430 03:28:23.646085 2556 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:28:23.646369 kubelet[2556]: E0430 03:28:23.646235 2556 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:28:23.707531 kubelet[2556]: I0430 03:28:23.706755 2556 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.710086 kubelet[2556]: I0430 03:28:23.709993 2556 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:28:23.710086 kubelet[2556]: I0430 03:28:23.710010 2556 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:28:23.710086 kubelet[2556]: I0430 03:28:23.710037 2556 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:23.710472 kubelet[2556]: I0430 03:28:23.710266 2556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:28:23.710472 kubelet[2556]: I0430 03:28:23.710290 2556 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:28:23.710472 kubelet[2556]: I0430 03:28:23.710316 2556 policy_none.go:49] "None policy: Start" Apr 30 03:28:23.713771 kubelet[2556]: I0430 03:28:23.713447 2556 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:28:23.713771 kubelet[2556]: I0430 03:28:23.713498 2556 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:28:23.714247 kubelet[2556]: I0430 03:28:23.714065 2556 state_mem.go:75] "Updated machine memory state" Apr 30 03:28:23.734064 kubelet[2556]: I0430 03:28:23.733637 2556 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.736658 kubelet[2556]: I0430 03:28:23.736469 2556 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.737940 kubelet[2556]: I0430 03:28:23.737898 2556 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:28:23.738420 kubelet[2556]: I0430 03:28:23.738119 2556 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:28:23.738420 kubelet[2556]: I0430 03:28:23.738257 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:28:23.749865 kubelet[2556]: I0430 03:28:23.749155 2556 topology_manager.go:215] "Topology Admit Handler" podUID="ba3ea4f3b3293435954e72248604cb22" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.749865 kubelet[2556]: I0430 03:28:23.749263 2556 topology_manager.go:215] "Topology Admit Handler" podUID="65afaab99c8ad24561b70b530b45c3e9" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.749865 kubelet[2556]: I0430 03:28:23.749319 2556 topology_manager.go:215] "Topology Admit Handler" podUID="87642321db6a299bda362c43f95325ac" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.783519 kubelet[2556]: W0430 03:28:23.780111 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:28:23.784519 kubelet[2556]: W0430 03:28:23.784486 2556 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:28:23.785233 kubelet[2556]: W0430 03:28:23.785129 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:28:23.829810 kubelet[2556]: I0430 03:28:23.829725 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba3ea4f3b3293435954e72248604cb22-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-0-7c044d2e24\" (UID: \"ba3ea4f3b3293435954e72248604cb22\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.829810 kubelet[2556]: I0430 03:28:23.829784 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba3ea4f3b3293435954e72248604cb22-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-0-7c044d2e24\" (UID: \"ba3ea4f3b3293435954e72248604cb22\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.829810 kubelet[2556]: I0430 03:28:23.829813 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.830108 kubelet[2556]: I0430 03:28:23.829833 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.830108 kubelet[2556]: I0430 03:28:23.829857 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87642321db6a299bda362c43f95325ac-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-0-7c044d2e24\" (UID: \"87642321db6a299bda362c43f95325ac\") " pod="kube-system/kube-scheduler-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.830108 kubelet[2556]: I0430 03:28:23.829891 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba3ea4f3b3293435954e72248604cb22-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-0-7c044d2e24\" (UID: \"ba3ea4f3b3293435954e72248604cb22\") " pod="kube-system/kube-apiserver-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.830108 kubelet[2556]: I0430 03:28:23.829923 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.830108 kubelet[2556]: I0430 03:28:23.829946 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:23.830280 kubelet[2556]: I0430 03:28:23.829964 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65afaab99c8ad24561b70b530b45c3e9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-0-7c044d2e24\" (UID: \"65afaab99c8ad24561b70b530b45c3e9\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:24.083110 kubelet[2556]: E0430 03:28:24.081628 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:24.087902 kubelet[2556]: E0430 03:28:24.086957 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:24.087902 kubelet[2556]: E0430 03:28:24.087208 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:24.582077 kubelet[2556]: I0430 03:28:24.580847 2556 apiserver.go:52] "Watching apiserver" Apr 30 03:28:24.623699 kubelet[2556]: I0430 03:28:24.623509 2556 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:28:24.678376 kubelet[2556]: E0430 03:28:24.678326 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:24.679240 kubelet[2556]: E0430 03:28:24.679210 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:24.716770 kubelet[2556]: W0430 03:28:24.716726 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:28:24.716974 kubelet[2556]: E0430 03:28:24.716818 2556 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-0-7c044d2e24\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-0-7c044d2e24" Apr 30 03:28:24.717558 kubelet[2556]: E0430 03:28:24.717522 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:24.818497 kubelet[2556]: I0430 03:28:24.818398 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-0-7c044d2e24" podStartSLOduration=1.818371578 podStartE2EDuration="1.818371578s" podCreationTimestamp="2025-04-30 03:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:24.709404796 +0000 UTC m=+1.244525539" watchObservedRunningTime="2025-04-30 03:28:24.818371578 +0000 UTC m=+1.353492323" Apr 30 03:28:24.886663 kubelet[2556]: I0430 03:28:24.886487 2556 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-0-7c044d2e24" podStartSLOduration=1.886457593 podStartE2EDuration="1.886457593s" podCreationTimestamp="2025-04-30 03:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:24.820757236 +0000 UTC m=+1.355877977" watchObservedRunningTime="2025-04-30 03:28:24.886457593 +0000 UTC m=+1.421578327" Apr 30 03:28:24.913279 kubelet[2556]: I0430 03:28:24.913062 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-0-7c044d2e24" podStartSLOduration=1.913035486 podStartE2EDuration="1.913035486s" podCreationTimestamp="2025-04-30 03:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:24.890444261 +0000 UTC m=+1.425565011" watchObservedRunningTime="2025-04-30 03:28:24.913035486 +0000 UTC m=+1.448156230" Apr 30 03:28:25.679227 kubelet[2556]: E0430 03:28:25.679188 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:25.681735 kubelet[2556]: E0430 03:28:25.680586 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:28.346713 kubelet[2556]: E0430 03:28:28.346642 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:28.684774 kubelet[2556]: E0430 03:28:28.684724 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:29.620747 systemd-resolved[1329]: Clock change detected. Flushing caches. Apr 30 03:28:29.621881 systemd-timesyncd[1344]: Contacted time server 23.186.168.127:123 (2.flatcar.pool.ntp.org). Apr 30 03:28:29.621967 systemd-timesyncd[1344]: Initial clock synchronization to Wed 2025-04-30 03:28:29.620437 UTC. Apr 30 03:28:30.348906 sudo[1661]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:30.359111 sshd[1658]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:30.365961 systemd[1]: sshd@7-143.198.63.212:22-139.178.89.65:54466.service: Deactivated successfully. Apr 30 03:28:30.369014 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:28:30.369277 systemd[1]: session-7.scope: Consumed 6.422s CPU time, 188.0M memory peak, 0B memory swap peak. Apr 30 03:28:30.370201 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:28:30.371558 systemd-logind[1450]: Removed session 7. 
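The recurring "Nameserver limits exceeded" messages reflect the kubelet's limit of three nameserver entries in a pod's resolv.conf; the applied line ("67.207.67.3 67.207.67.2 67.207.67.3") is simply the first three entries of the host resolv.conf, duplicates included. A sketch of that truncation, not the kubelet's actual implementation; the four-entry host resolv.conf below is hypothetical, since the log only shows the first three entries.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // kubelet's documented limit for pod resolv.conf

// applyNameserverLimit keeps only the first maxNameservers "nameserver"
// entries, mirroring what the log reports as "the applied nameserver line".
func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	// First three entries come from the log; the fourth is hypothetical.
	hostResolvConf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 8.8.8.8\n"
	fmt.Println(applyNameserverLimit(hostResolvConf)) // [67.207.67.3 67.207.67.2 67.207.67.3]
}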
Apr 30 03:28:32.197379 kubelet[2556]: E0430 03:28:32.196981 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:32.303188 kubelet[2556]: E0430 03:28:32.303152 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:35.245295 kubelet[2556]: E0430 03:28:35.244144 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:36.947318 kubelet[2556]: I0430 03:28:36.945273 2556 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:28:36.947940 containerd[1473]: time="2025-04-30T03:28:36.945811396Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:28:36.949078 kubelet[2556]: I0430 03:28:36.948550 2556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:28:37.926603 kubelet[2556]: I0430 03:28:37.925969 2556 topology_manager.go:215] "Topology Admit Handler" podUID="189d277b-2410-41fb-aaff-d8058cde05bb" podNamespace="kube-system" podName="kube-proxy-n75xd" Apr 30 03:28:37.941120 systemd[1]: Created slice kubepods-besteffort-pod189d277b_2410_41fb_aaff_d8058cde05bb.slice - libcontainer container kubepods-besteffort-pod189d277b_2410_41fb_aaff_d8058cde05bb.slice. Apr 30 03:28:38.028001 kubelet[2556]: I0430 03:28:38.027864 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/189d277b-2410-41fb-aaff-d8058cde05bb-lib-modules\") pod \"kube-proxy-n75xd\" (UID: \"189d277b-2410-41fb-aaff-d8058cde05bb\") " pod="kube-system/kube-proxy-n75xd" Apr 30 03:28:38.028001 kubelet[2556]: I0430 03:28:38.027925 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfbqt\" (UniqueName: \"kubernetes.io/projected/189d277b-2410-41fb-aaff-d8058cde05bb-kube-api-access-cfbqt\") pod \"kube-proxy-n75xd\" (UID: \"189d277b-2410-41fb-aaff-d8058cde05bb\") " pod="kube-system/kube-proxy-n75xd" Apr 30 03:28:38.028001 kubelet[2556]: I0430 03:28:38.027947 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/189d277b-2410-41fb-aaff-d8058cde05bb-kube-proxy\") pod \"kube-proxy-n75xd\" (UID: \"189d277b-2410-41fb-aaff-d8058cde05bb\") " pod="kube-system/kube-proxy-n75xd" Apr 30 03:28:38.028001 kubelet[2556]: I0430 03:28:38.027969 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/189d277b-2410-41fb-aaff-d8058cde05bb-xtables-lock\") pod \"kube-proxy-n75xd\" (UID: \"189d277b-2410-41fb-aaff-d8058cde05bb\") " pod="kube-system/kube-proxy-n75xd" Apr 30 03:28:38.161623 kubelet[2556]: I0430 03:28:38.160191 2556 topology_manager.go:215] "Topology Admit Handler" podUID="bc3e748a-636a-4b38-9370-8ed47e911d79" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-lxb8s" Apr 30 03:28:38.173849 systemd[1]: Created slice kubepods-besteffort-podbc3e748a_636a_4b38_9370_8ed47e911d79.slice 
- libcontainer container kubepods-besteffort-podbc3e748a_636a_4b38_9370_8ed47e911d79.slice. Apr 30 03:28:38.201121 update_engine[1452]: I20250430 03:28:38.200862 1452 update_attempter.cc:509] Updating boot flags... Apr 30 03:28:38.230668 kubelet[2556]: I0430 03:28:38.230208 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc3e748a-636a-4b38-9370-8ed47e911d79-var-lib-calico\") pod \"tigera-operator-797db67f8-lxb8s\" (UID: \"bc3e748a-636a-4b38-9370-8ed47e911d79\") " pod="tigera-operator/tigera-operator-797db67f8-lxb8s" Apr 30 03:28:38.230668 kubelet[2556]: I0430 03:28:38.230606 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhk24\" (UniqueName: \"kubernetes.io/projected/bc3e748a-636a-4b38-9370-8ed47e911d79-kube-api-access-zhk24\") pod \"tigera-operator-797db67f8-lxb8s\" (UID: \"bc3e748a-636a-4b38-9370-8ed47e911d79\") " pod="tigera-operator/tigera-operator-797db67f8-lxb8s" Apr 30 03:28:38.242030 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2642) Apr 30 03:28:38.252763 kubelet[2556]: E0430 03:28:38.250216 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:38.266248 containerd[1473]: time="2025-04-30T03:28:38.261567712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n75xd,Uid:189d277b-2410-41fb-aaff-d8058cde05bb,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:38.316616 containerd[1473]: time="2025-04-30T03:28:38.316240397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:38.318145 containerd[1473]: time="2025-04-30T03:28:38.317364617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:38.318145 containerd[1473]: time="2025-04-30T03:28:38.317389406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:38.319530 containerd[1473]: time="2025-04-30T03:28:38.318850699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:38.348956 systemd[1]: run-containerd-runc-k8s.io-22765e92bae3109f6829269f55926e12cfcf062469ebfb2e05832f3fad35f119-runc.FebiqO.mount: Deactivated successfully. Apr 30 03:28:38.361540 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2641) Apr 30 03:28:38.385752 systemd[1]: Started cri-containerd-22765e92bae3109f6829269f55926e12cfcf062469ebfb2e05832f3fad35f119.scope - libcontainer container 22765e92bae3109f6829269f55926e12cfcf062469ebfb2e05832f3fad35f119. 
Apr 30 03:28:38.458849 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2641) Apr 30 03:28:38.459940 containerd[1473]: time="2025-04-30T03:28:38.459851318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n75xd,Uid:189d277b-2410-41fb-aaff-d8058cde05bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"22765e92bae3109f6829269f55926e12cfcf062469ebfb2e05832f3fad35f119\"" Apr 30 03:28:38.461491 kubelet[2556]: E0430 03:28:38.461068 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:38.466760 containerd[1473]: time="2025-04-30T03:28:38.466675099Z" level=info msg="CreateContainer within sandbox \"22765e92bae3109f6829269f55926e12cfcf062469ebfb2e05832f3fad35f119\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:28:38.487784 containerd[1473]: time="2025-04-30T03:28:38.482562869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-lxb8s,Uid:bc3e748a-636a-4b38-9370-8ed47e911d79,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:28:38.506584 containerd[1473]: time="2025-04-30T03:28:38.505059222Z" level=info msg="CreateContainer within sandbox \"22765e92bae3109f6829269f55926e12cfcf062469ebfb2e05832f3fad35f119\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e841a6c6c8f053a27eded86cc332b3f55e97c799ceb7848c02ea4d87be1b9e7e\"" Apr 30 03:28:38.512650 containerd[1473]: time="2025-04-30T03:28:38.511475900Z" level=info msg="StartContainer for \"e841a6c6c8f053a27eded86cc332b3f55e97c799ceb7848c02ea4d87be1b9e7e\"" Apr 30 03:28:38.555404 containerd[1473]: time="2025-04-30T03:28:38.555109086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:38.555667 containerd[1473]: time="2025-04-30T03:28:38.555346203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:38.555667 containerd[1473]: time="2025-04-30T03:28:38.555364650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:38.555667 containerd[1473]: time="2025-04-30T03:28:38.555576408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:38.570809 systemd[1]: Started cri-containerd-e841a6c6c8f053a27eded86cc332b3f55e97c799ceb7848c02ea4d87be1b9e7e.scope - libcontainer container e841a6c6c8f053a27eded86cc332b3f55e97c799ceb7848c02ea4d87be1b9e7e. Apr 30 03:28:38.591851 systemd[1]: Started cri-containerd-f3922a29819e48a9f9890fd819965ff78cf0e113dbae18751bf9b47b7db59216.scope - libcontainer container f3922a29819e48a9f9890fd819965ff78cf0e113dbae18751bf9b47b7db59216. 
Apr 30 03:28:38.633078 containerd[1473]: time="2025-04-30T03:28:38.632801046Z" level=info msg="StartContainer for \"e841a6c6c8f053a27eded86cc332b3f55e97c799ceb7848c02ea4d87be1b9e7e\" returns successfully" Apr 30 03:28:38.676915 containerd[1473]: time="2025-04-30T03:28:38.676413498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-lxb8s,Uid:bc3e748a-636a-4b38-9370-8ed47e911d79,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f3922a29819e48a9f9890fd819965ff78cf0e113dbae18751bf9b47b7db59216\"" Apr 30 03:28:38.682545 containerd[1473]: time="2025-04-30T03:28:38.680895055Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:28:39.321145 kubelet[2556]: E0430 03:28:39.321095 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:39.339545 kubelet[2556]: I0430 03:28:39.339421 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n75xd" podStartSLOduration=2.339394251 podStartE2EDuration="2.339394251s" podCreationTimestamp="2025-04-30 03:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:39.339271827 +0000 UTC m=+15.263571563" watchObservedRunningTime="2025-04-30 03:28:39.339394251 +0000 UTC m=+15.263693987" Apr 30 03:28:41.168946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321165856.mount: Deactivated successfully. Apr 30 03:28:41.858899 containerd[1473]: time="2025-04-30T03:28:41.858816084Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:41.860728 containerd[1473]: time="2025-04-30T03:28:41.860634432Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:28:41.862272 containerd[1473]: time="2025-04-30T03:28:41.862039941Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:41.870648 containerd[1473]: time="2025-04-30T03:28:41.867083029Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:41.870648 containerd[1473]: time="2025-04-30T03:28:41.870181988Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.189228691s" Apr 30 03:28:41.870648 containerd[1473]: time="2025-04-30T03:28:41.870248111Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:28:41.933076 containerd[1473]: time="2025-04-30T03:28:41.933012287Z" level=info msg="CreateContainer within sandbox \"f3922a29819e48a9f9890fd819965ff78cf0e113dbae18751bf9b47b7db59216\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:28:41.955704 containerd[1473]: 
time="2025-04-30T03:28:41.955622935Z" level=info msg="CreateContainer within sandbox \"f3922a29819e48a9f9890fd819965ff78cf0e113dbae18751bf9b47b7db59216\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4233f260581646f63c958dc6b2465a94ac676d7706888a97d4cd84713cd9b2f4\"" Apr 30 03:28:41.956896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551885261.mount: Deactivated successfully. Apr 30 03:28:41.959877 containerd[1473]: time="2025-04-30T03:28:41.959578682Z" level=info msg="StartContainer for \"4233f260581646f63c958dc6b2465a94ac676d7706888a97d4cd84713cd9b2f4\"" Apr 30 03:28:42.012954 systemd[1]: Started cri-containerd-4233f260581646f63c958dc6b2465a94ac676d7706888a97d4cd84713cd9b2f4.scope - libcontainer container 4233f260581646f63c958dc6b2465a94ac676d7706888a97d4cd84713cd9b2f4. Apr 30 03:28:42.060638 containerd[1473]: time="2025-04-30T03:28:42.060356253Z" level=info msg="StartContainer for \"4233f260581646f63c958dc6b2465a94ac676d7706888a97d4cd84713cd9b2f4\" returns successfully" Apr 30 03:28:45.143819 kubelet[2556]: I0430 03:28:45.143721 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-lxb8s" podStartSLOduration=3.926058556 podStartE2EDuration="7.143680264s" podCreationTimestamp="2025-04-30 03:28:38 +0000 UTC" firstStartedPulling="2025-04-30 03:28:38.678958685 +0000 UTC m=+14.603258397" lastFinishedPulling="2025-04-30 03:28:41.896580392 +0000 UTC m=+17.820880105" observedRunningTime="2025-04-30 03:28:42.351396176 +0000 UTC m=+18.275695912" watchObservedRunningTime="2025-04-30 03:28:45.143680264 +0000 UTC m=+21.067980001" Apr 30 03:28:45.144412 kubelet[2556]: I0430 03:28:45.144044 2556 topology_manager.go:215] "Topology Admit Handler" podUID="c593f81f-6866-439d-8485-683a3575cf5f" podNamespace="calico-system" podName="calico-typha-55594f56c8-g9hkh" Apr 30 03:28:45.157560 systemd[1]: Created slice kubepods-besteffort-podc593f81f_6866_439d_8485_683a3575cf5f.slice - libcontainer container kubepods-besteffort-podc593f81f_6866_439d_8485_683a3575cf5f.slice. 
Apr 30 03:28:45.200865 kubelet[2556]: I0430 03:28:45.200693 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c593f81f-6866-439d-8485-683a3575cf5f-typha-certs\") pod \"calico-typha-55594f56c8-g9hkh\" (UID: \"c593f81f-6866-439d-8485-683a3575cf5f\") " pod="calico-system/calico-typha-55594f56c8-g9hkh" Apr 30 03:28:45.200865 kubelet[2556]: I0430 03:28:45.200744 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c593f81f-6866-439d-8485-683a3575cf5f-tigera-ca-bundle\") pod \"calico-typha-55594f56c8-g9hkh\" (UID: \"c593f81f-6866-439d-8485-683a3575cf5f\") " pod="calico-system/calico-typha-55594f56c8-g9hkh" Apr 30 03:28:45.200865 kubelet[2556]: I0430 03:28:45.200768 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgkwj\" (UniqueName: \"kubernetes.io/projected/c593f81f-6866-439d-8485-683a3575cf5f-kube-api-access-bgkwj\") pod \"calico-typha-55594f56c8-g9hkh\" (UID: \"c593f81f-6866-439d-8485-683a3575cf5f\") " pod="calico-system/calico-typha-55594f56c8-g9hkh" Apr 30 03:28:45.374785 kubelet[2556]: I0430 03:28:45.374671 2556 topology_manager.go:215] "Topology Admit Handler" podUID="81e72421-8a40-4992-a91b-cb68088524a7" podNamespace="calico-system" podName="calico-node-b74fm" Apr 30 03:28:45.385723 systemd[1]: Created slice kubepods-besteffort-pod81e72421_8a40_4992_a91b_cb68088524a7.slice - libcontainer container kubepods-besteffort-pod81e72421_8a40_4992_a91b_cb68088524a7.slice. Apr 30 03:28:45.464208 kubelet[2556]: E0430 03:28:45.463950 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:45.465000 containerd[1473]: time="2025-04-30T03:28:45.464866147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55594f56c8-g9hkh,Uid:c593f81f-6866-439d-8485-683a3575cf5f,Namespace:calico-system,Attempt:0,}" Apr 30 03:28:45.499871 containerd[1473]: time="2025-04-30T03:28:45.499685205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:45.499871 containerd[1473]: time="2025-04-30T03:28:45.499747985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:45.499871 containerd[1473]: time="2025-04-30T03:28:45.499777352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:45.500800 containerd[1473]: time="2025-04-30T03:28:45.499889967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:45.505217 kubelet[2556]: I0430 03:28:45.502366 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81e72421-8a40-4992-a91b-cb68088524a7-tigera-ca-bundle\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505217 kubelet[2556]: I0430 03:28:45.502563 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/81e72421-8a40-4992-a91b-cb68088524a7-flexvol-driver-host\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505217 kubelet[2556]: I0430 03:28:45.502600 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/81e72421-8a40-4992-a91b-cb68088524a7-node-certs\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505217 kubelet[2556]: I0430 03:28:45.502620 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/81e72421-8a40-4992-a91b-cb68088524a7-cni-net-dir\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505217 kubelet[2556]: I0430 03:28:45.502636 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/81e72421-8a40-4992-a91b-cb68088524a7-cni-log-dir\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505558 kubelet[2556]: I0430 03:28:45.502654 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8s2r\" (UniqueName: \"kubernetes.io/projected/81e72421-8a40-4992-a91b-cb68088524a7-kube-api-access-f8s2r\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505558 kubelet[2556]: I0430 03:28:45.502670 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/81e72421-8a40-4992-a91b-cb68088524a7-var-run-calico\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505558 kubelet[2556]: I0430 03:28:45.502688 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81e72421-8a40-4992-a91b-cb68088524a7-lib-modules\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505558 kubelet[2556]: I0430 03:28:45.502702 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81e72421-8a40-4992-a91b-cb68088524a7-xtables-lock\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 
03:28:45.505558 kubelet[2556]: I0430 03:28:45.502715 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/81e72421-8a40-4992-a91b-cb68088524a7-cni-bin-dir\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505760 kubelet[2556]: I0430 03:28:45.502735 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/81e72421-8a40-4992-a91b-cb68088524a7-policysync\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.505760 kubelet[2556]: I0430 03:28:45.502758 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81e72421-8a40-4992-a91b-cb68088524a7-var-lib-calico\") pod \"calico-node-b74fm\" (UID: \"81e72421-8a40-4992-a91b-cb68088524a7\") " pod="calico-system/calico-node-b74fm" Apr 30 03:28:45.536775 systemd[1]: Started cri-containerd-9b475ae724e9602c4760edac53beac6676c3eb79eefe031783005c5ba47528a1.scope - libcontainer container 9b475ae724e9602c4760edac53beac6676c3eb79eefe031783005c5ba47528a1. Apr 30 03:28:45.558534 kubelet[2556]: I0430 03:28:45.558158 2556 topology_manager.go:215] "Topology Admit Handler" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" podNamespace="calico-system" podName="csi-node-driver-bfvvm" Apr 30 03:28:45.558995 kubelet[2556]: E0430 03:28:45.558754 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bfvvm" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" Apr 30 03:28:45.604610 kubelet[2556]: I0430 03:28:45.603839 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f623e99-7bb9-4ed3-8866-963ff1311503-kubelet-dir\") pod \"csi-node-driver-bfvvm\" (UID: \"8f623e99-7bb9-4ed3-8866-963ff1311503\") " pod="calico-system/csi-node-driver-bfvvm" Apr 30 03:28:45.604610 kubelet[2556]: I0430 03:28:45.604246 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjdfg\" (UniqueName: \"kubernetes.io/projected/8f623e99-7bb9-4ed3-8866-963ff1311503-kube-api-access-cjdfg\") pod \"csi-node-driver-bfvvm\" (UID: \"8f623e99-7bb9-4ed3-8866-963ff1311503\") " pod="calico-system/csi-node-driver-bfvvm" Apr 30 03:28:45.604610 kubelet[2556]: I0430 03:28:45.604296 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8f623e99-7bb9-4ed3-8866-963ff1311503-socket-dir\") pod \"csi-node-driver-bfvvm\" (UID: \"8f623e99-7bb9-4ed3-8866-963ff1311503\") " pod="calico-system/csi-node-driver-bfvvm" Apr 30 03:28:45.604610 kubelet[2556]: I0430 03:28:45.604328 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8f623e99-7bb9-4ed3-8866-963ff1311503-registration-dir\") pod \"csi-node-driver-bfvvm\" (UID: \"8f623e99-7bb9-4ed3-8866-963ff1311503\") " pod="calico-system/csi-node-driver-bfvvm" Apr 30 
03:28:45.604610 kubelet[2556]: I0430 03:28:45.604393 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8f623e99-7bb9-4ed3-8866-963ff1311503-varrun\") pod \"csi-node-driver-bfvvm\" (UID: \"8f623e99-7bb9-4ed3-8866-963ff1311503\") " pod="calico-system/csi-node-driver-bfvvm" Apr 30 03:28:45.611365 kubelet[2556]: E0430 03:28:45.611326 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.611365 kubelet[2556]: W0430 03:28:45.611354 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.611587 kubelet[2556]: E0430 03:28:45.611380 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.615777 kubelet[2556]: E0430 03:28:45.615700 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.615777 kubelet[2556]: W0430 03:28:45.615732 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.616097 kubelet[2556]: E0430 03:28:45.616020 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.624038 kubelet[2556]: E0430 03:28:45.623993 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.624278 kubelet[2556]: W0430 03:28:45.624017 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.624278 kubelet[2556]: E0430 03:28:45.624235 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:45.650459 containerd[1473]: time="2025-04-30T03:28:45.650397493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55594f56c8-g9hkh,Uid:c593f81f-6866-439d-8485-683a3575cf5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b475ae724e9602c4760edac53beac6676c3eb79eefe031783005c5ba47528a1\"" Apr 30 03:28:45.652047 kubelet[2556]: E0430 03:28:45.651996 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:45.654106 containerd[1473]: time="2025-04-30T03:28:45.653985324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:28:45.691367 kubelet[2556]: E0430 03:28:45.691050 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:45.692604 containerd[1473]: time="2025-04-30T03:28:45.691950239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b74fm,Uid:81e72421-8a40-4992-a91b-cb68088524a7,Namespace:calico-system,Attempt:0,}" Apr 30 03:28:45.705922 kubelet[2556]: E0430 03:28:45.705883 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.707683 kubelet[2556]: W0430 03:28:45.707580 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.707683 kubelet[2556]: E0430 03:28:45.707632 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.708909 kubelet[2556]: E0430 03:28:45.708226 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.708909 kubelet[2556]: W0430 03:28:45.708241 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.708909 kubelet[2556]: E0430 03:28:45.708264 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.708909 kubelet[2556]: E0430 03:28:45.708647 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.708909 kubelet[2556]: W0430 03:28:45.708667 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.708909 kubelet[2556]: E0430 03:28:45.708695 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:45.709333 kubelet[2556]: E0430 03:28:45.709007 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.709333 kubelet[2556]: W0430 03:28:45.709021 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.709333 kubelet[2556]: E0430 03:28:45.709044 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.711218 kubelet[2556]: E0430 03:28:45.710123 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.711218 kubelet[2556]: W0430 03:28:45.710154 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.711218 kubelet[2556]: E0430 03:28:45.710172 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.711218 kubelet[2556]: E0430 03:28:45.710728 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.711218 kubelet[2556]: W0430 03:28:45.710739 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.711811 kubelet[2556]: E0430 03:28:45.711706 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.711811 kubelet[2556]: E0430 03:28:45.711717 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.711811 kubelet[2556]: W0430 03:28:45.711803 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.712278 kubelet[2556]: E0430 03:28:45.712000 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.712395 kubelet[2556]: E0430 03:28:45.712309 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.712395 kubelet[2556]: W0430 03:28:45.712322 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.712586 kubelet[2556]: E0430 03:28:45.712449 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:45.713119 kubelet[2556]: E0430 03:28:45.713063 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.713119 kubelet[2556]: W0430 03:28:45.713074 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.713119 kubelet[2556]: E0430 03:28:45.713089 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.716605 kubelet[2556]: E0430 03:28:45.713705 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.716605 kubelet[2556]: W0430 03:28:45.713724 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.716861 kubelet[2556]: E0430 03:28:45.716808 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.717365 kubelet[2556]: E0430 03:28:45.717342 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.717365 kubelet[2556]: W0430 03:28:45.717360 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.718058 kubelet[2556]: E0430 03:28:45.718030 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.718574 kubelet[2556]: E0430 03:28:45.718361 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.718657 kubelet[2556]: W0430 03:28:45.718573 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.719052 kubelet[2556]: E0430 03:28:45.719024 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.720352 kubelet[2556]: E0430 03:28:45.719477 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.720352 kubelet[2556]: W0430 03:28:45.719491 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.720680 kubelet[2556]: E0430 03:28:45.720654 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:45.723891 kubelet[2556]: E0430 03:28:45.723858 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.723891 kubelet[2556]: W0430 03:28:45.723881 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.724312 kubelet[2556]: E0430 03:28:45.724287 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.724919 kubelet[2556]: E0430 03:28:45.724891 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.725304 kubelet[2556]: W0430 03:28:45.725146 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.725304 kubelet[2556]: E0430 03:28:45.725193 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.725830 kubelet[2556]: E0430 03:28:45.725809 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.725830 kubelet[2556]: W0430 03:28:45.725828 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.726101 kubelet[2556]: E0430 03:28:45.726082 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.727106 kubelet[2556]: E0430 03:28:45.726665 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.727106 kubelet[2556]: W0430 03:28:45.726685 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.727106 kubelet[2556]: E0430 03:28:45.726723 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.727470 kubelet[2556]: E0430 03:28:45.727449 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.727470 kubelet[2556]: W0430 03:28:45.727466 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.727738 kubelet[2556]: E0430 03:28:45.727500 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:45.728263 kubelet[2556]: E0430 03:28:45.728154 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.728263 kubelet[2556]: W0430 03:28:45.728170 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.728263 kubelet[2556]: E0430 03:28:45.728204 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.729185 kubelet[2556]: E0430 03:28:45.728571 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.729185 kubelet[2556]: W0430 03:28:45.728582 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.729185 kubelet[2556]: E0430 03:28:45.729033 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.729185 kubelet[2556]: E0430 03:28:45.729175 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.729185 kubelet[2556]: W0430 03:28:45.729182 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.730125 kubelet[2556]: E0430 03:28:45.730097 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.730304 kubelet[2556]: E0430 03:28:45.730288 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.730304 kubelet[2556]: W0430 03:28:45.730300 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.730974 kubelet[2556]: E0430 03:28:45.730339 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.730974 kubelet[2556]: E0430 03:28:45.730498 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.730974 kubelet[2556]: W0430 03:28:45.730554 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.731106 kubelet[2556]: E0430 03:28:45.731080 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:45.733045 kubelet[2556]: E0430 03:28:45.732640 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.733045 kubelet[2556]: W0430 03:28:45.732658 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.733045 kubelet[2556]: E0430 03:28:45.732675 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.733045 kubelet[2556]: E0430 03:28:45.732947 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.733045 kubelet[2556]: W0430 03:28:45.732956 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.733045 kubelet[2556]: E0430 03:28:45.732970 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.745884 containerd[1473]: time="2025-04-30T03:28:45.743384113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:45.745884 containerd[1473]: time="2025-04-30T03:28:45.743482513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:45.745884 containerd[1473]: time="2025-04-30T03:28:45.743502898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:45.745884 containerd[1473]: time="2025-04-30T03:28:45.743678392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:45.753726 kubelet[2556]: E0430 03:28:45.753688 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:45.753726 kubelet[2556]: W0430 03:28:45.753714 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:45.753895 kubelet[2556]: E0430 03:28:45.753743 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:45.770505 systemd[1]: Started cri-containerd-dfe85c2886403ec3035d79862f9eb429140e18e45de2b39967bdbf551b24fe40.scope - libcontainer container dfe85c2886403ec3035d79862f9eb429140e18e45de2b39967bdbf551b24fe40. 
Apr 30 03:28:45.824388 containerd[1473]: time="2025-04-30T03:28:45.824226224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b74fm,Uid:81e72421-8a40-4992-a91b-cb68088524a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfe85c2886403ec3035d79862f9eb429140e18e45de2b39967bdbf551b24fe40\"" Apr 30 03:28:45.826708 kubelet[2556]: E0430 03:28:45.826483 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:47.257361 kubelet[2556]: E0430 03:28:47.257296 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bfvvm" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" Apr 30 03:28:48.801238 containerd[1473]: time="2025-04-30T03:28:48.801169033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:48.804645 containerd[1473]: time="2025-04-30T03:28:48.804422493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:28:48.805320 containerd[1473]: time="2025-04-30T03:28:48.805283370Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:48.808143 containerd[1473]: time="2025-04-30T03:28:48.808100719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:48.809526 containerd[1473]: time="2025-04-30T03:28:48.809145790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.155110491s" Apr 30 03:28:48.809526 containerd[1473]: time="2025-04-30T03:28:48.809187946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:28:48.810740 containerd[1473]: time="2025-04-30T03:28:48.810711251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:28:48.835037 containerd[1473]: time="2025-04-30T03:28:48.834548571Z" level=info msg="CreateContainer within sandbox \"9b475ae724e9602c4760edac53beac6676c3eb79eefe031783005c5ba47528a1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:28:48.853730 containerd[1473]: time="2025-04-30T03:28:48.853674834Z" level=info msg="CreateContainer within sandbox \"9b475ae724e9602c4760edac53beac6676c3eb79eefe031783005c5ba47528a1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b10609751a4f1792934c2779e4b28d5167d901f1f538f395e4b25846c2d29beb\"" Apr 30 03:28:48.855147 containerd[1473]: time="2025-04-30T03:28:48.854789246Z" level=info msg="StartContainer for \"b10609751a4f1792934c2779e4b28d5167d901f1f538f395e4b25846c2d29beb\"" Apr 30 
03:28:48.893859 systemd[1]: Started cri-containerd-b10609751a4f1792934c2779e4b28d5167d901f1f538f395e4b25846c2d29beb.scope - libcontainer container b10609751a4f1792934c2779e4b28d5167d901f1f538f395e4b25846c2d29beb. Apr 30 03:28:48.966625 containerd[1473]: time="2025-04-30T03:28:48.966371676Z" level=info msg="StartContainer for \"b10609751a4f1792934c2779e4b28d5167d901f1f538f395e4b25846c2d29beb\" returns successfully" Apr 30 03:28:49.258231 kubelet[2556]: E0430 03:28:49.257478 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bfvvm" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" Apr 30 03:28:49.354238 kubelet[2556]: E0430 03:28:49.353782 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:49.368376 kubelet[2556]: I0430 03:28:49.368279 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55594f56c8-g9hkh" podStartSLOduration=1.210993817 podStartE2EDuration="4.36825643s" podCreationTimestamp="2025-04-30 03:28:45 +0000 UTC" firstStartedPulling="2025-04-30 03:28:45.653118822 +0000 UTC m=+21.577418534" lastFinishedPulling="2025-04-30 03:28:48.810381426 +0000 UTC m=+24.734681147" observedRunningTime="2025-04-30 03:28:49.367399729 +0000 UTC m=+25.291699464" watchObservedRunningTime="2025-04-30 03:28:49.36825643 +0000 UTC m=+25.292556159" Apr 30 03:28:49.388917 kubelet[2556]: E0430 03:28:49.388873 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.388917 kubelet[2556]: W0430 03:28:49.388923 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.389208 kubelet[2556]: E0430 03:28:49.388954 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.389395 kubelet[2556]: E0430 03:28:49.389377 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.389439 kubelet[2556]: W0430 03:28:49.389420 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.389482 kubelet[2556]: E0430 03:28:49.389439 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:49.389971 kubelet[2556]: E0430 03:28:49.389945 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.389971 kubelet[2556]: W0430 03:28:49.389967 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.390144 kubelet[2556]: E0430 03:28:49.389985 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.390653 kubelet[2556]: E0430 03:28:49.390628 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.390653 kubelet[2556]: W0430 03:28:49.390646 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.390812 kubelet[2556]: E0430 03:28:49.390665 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.391896 kubelet[2556]: E0430 03:28:49.391749 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.391896 kubelet[2556]: W0430 03:28:49.391888 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.392061 kubelet[2556]: E0430 03:28:49.391910 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.393317 kubelet[2556]: E0430 03:28:49.392789 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.393317 kubelet[2556]: W0430 03:28:49.392808 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.393317 kubelet[2556]: E0430 03:28:49.392841 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.394139 kubelet[2556]: E0430 03:28:49.393820 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.394139 kubelet[2556]: W0430 03:28:49.393841 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.394139 kubelet[2556]: E0430 03:28:49.393860 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:49.394357 kubelet[2556]: E0430 03:28:49.394166 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.394357 kubelet[2556]: W0430 03:28:49.394190 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.394357 kubelet[2556]: E0430 03:28:49.394205 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.395744 kubelet[2556]: E0430 03:28:49.395667 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.395956 kubelet[2556]: W0430 03:28:49.395749 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.395956 kubelet[2556]: E0430 03:28:49.395779 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.397018 kubelet[2556]: E0430 03:28:49.396567 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.397018 kubelet[2556]: W0430 03:28:49.396589 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.397018 kubelet[2556]: E0430 03:28:49.396613 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.397018 kubelet[2556]: E0430 03:28:49.396922 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.397018 kubelet[2556]: W0430 03:28:49.396934 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.397018 kubelet[2556]: E0430 03:28:49.396948 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.397821 kubelet[2556]: E0430 03:28:49.397791 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.397821 kubelet[2556]: W0430 03:28:49.397813 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.397971 kubelet[2556]: E0430 03:28:49.397831 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:49.398331 kubelet[2556]: E0430 03:28:49.398135 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.398331 kubelet[2556]: W0430 03:28:49.398154 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.398331 kubelet[2556]: E0430 03:28:49.398169 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.398590 kubelet[2556]: E0430 03:28:49.398425 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.398590 kubelet[2556]: W0430 03:28:49.398438 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.398590 kubelet[2556]: E0430 03:28:49.398451 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.399734 kubelet[2556]: E0430 03:28:49.399687 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.399937 kubelet[2556]: W0430 03:28:49.399711 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.399937 kubelet[2556]: E0430 03:28:49.399840 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.439068 kubelet[2556]: E0430 03:28:49.439026 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.439068 kubelet[2556]: W0430 03:28:49.439059 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.439383 kubelet[2556]: E0430 03:28:49.439091 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.439470 kubelet[2556]: E0430 03:28:49.439452 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.439546 kubelet[2556]: W0430 03:28:49.439473 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.439546 kubelet[2556]: E0430 03:28:49.439532 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:49.439894 kubelet[2556]: E0430 03:28:49.439874 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.439894 kubelet[2556]: W0430 03:28:49.439892 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.440052 kubelet[2556]: E0430 03:28:49.439914 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.440785 kubelet[2556]: E0430 03:28:49.440752 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.440908 kubelet[2556]: W0430 03:28:49.440789 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.440908 kubelet[2556]: E0430 03:28:49.440815 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.441126 kubelet[2556]: E0430 03:28:49.441112 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.441178 kubelet[2556]: W0430 03:28:49.441126 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.441305 kubelet[2556]: E0430 03:28:49.441213 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.441419 kubelet[2556]: E0430 03:28:49.441369 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.441419 kubelet[2556]: W0430 03:28:49.441398 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.441644 kubelet[2556]: E0430 03:28:49.441481 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.441754 kubelet[2556]: E0430 03:28:49.441646 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.441754 kubelet[2556]: W0430 03:28:49.441656 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.441754 kubelet[2556]: E0430 03:28:49.441694 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:49.442031 kubelet[2556]: E0430 03:28:49.442016 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.442089 kubelet[2556]: W0430 03:28:49.442032 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.442171 kubelet[2556]: E0430 03:28:49.442110 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.442628 kubelet[2556]: E0430 03:28:49.442601 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.442903 kubelet[2556]: W0430 03:28:49.442866 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.442990 kubelet[2556]: E0430 03:28:49.442909 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.444492 kubelet[2556]: E0430 03:28:49.444355 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.444492 kubelet[2556]: W0430 03:28:49.444381 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.444492 kubelet[2556]: E0430 03:28:49.444410 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.445193 kubelet[2556]: E0430 03:28:49.444955 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.445193 kubelet[2556]: W0430 03:28:49.444972 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.445193 kubelet[2556]: E0430 03:28:49.445095 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.446151 kubelet[2556]: E0430 03:28:49.446009 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.446151 kubelet[2556]: W0430 03:28:49.446026 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.446151 kubelet[2556]: E0430 03:28:49.446073 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:49.447644 kubelet[2556]: E0430 03:28:49.447618 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.447826 kubelet[2556]: W0430 03:28:49.447737 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.447976 kubelet[2556]: E0430 03:28:49.447923 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.448395 kubelet[2556]: E0430 03:28:49.448278 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.448395 kubelet[2556]: W0430 03:28:49.448292 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.448395 kubelet[2556]: E0430 03:28:49.448335 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.448903 kubelet[2556]: E0430 03:28:49.448721 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.448903 kubelet[2556]: W0430 03:28:49.448736 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.448903 kubelet[2556]: E0430 03:28:49.448751 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.449634 kubelet[2556]: E0430 03:28:49.449228 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.449634 kubelet[2556]: W0430 03:28:49.449245 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.449634 kubelet[2556]: E0430 03:28:49.449289 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:49.450062 kubelet[2556]: E0430 03:28:49.450036 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.450062 kubelet[2556]: W0430 03:28:49.450060 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.450178 kubelet[2556]: E0430 03:28:49.450083 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:49.451151 kubelet[2556]: E0430 03:28:49.451125 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:49.451151 kubelet[2556]: W0430 03:28:49.451148 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:49.451302 kubelet[2556]: E0430 03:28:49.451168 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.355736 kubelet[2556]: E0430 03:28:50.355683 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:50.406762 kubelet[2556]: E0430 03:28:50.406601 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.406762 kubelet[2556]: W0430 03:28:50.406634 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.406762 kubelet[2556]: E0430 03:28:50.406660 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.407097 kubelet[2556]: E0430 03:28:50.407080 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.407310 kubelet[2556]: W0430 03:28:50.407154 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.407310 kubelet[2556]: E0430 03:28:50.407180 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.407650 kubelet[2556]: E0430 03:28:50.407619 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.408042 kubelet[2556]: W0430 03:28:50.407803 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.408042 kubelet[2556]: E0430 03:28:50.407846 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:50.408194 kubelet[2556]: E0430 03:28:50.408182 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.408243 kubelet[2556]: W0430 03:28:50.408233 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.408323 kubelet[2556]: E0430 03:28:50.408308 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.408620 kubelet[2556]: E0430 03:28:50.408608 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.408704 kubelet[2556]: W0430 03:28:50.408694 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.408755 kubelet[2556]: E0430 03:28:50.408746 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.409105 kubelet[2556]: E0430 03:28:50.409000 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.409105 kubelet[2556]: W0430 03:28:50.409014 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.409105 kubelet[2556]: E0430 03:28:50.409025 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.409254 kubelet[2556]: E0430 03:28:50.409245 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.409310 kubelet[2556]: W0430 03:28:50.409301 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.409359 kubelet[2556]: E0430 03:28:50.409348 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.409755 kubelet[2556]: E0430 03:28:50.409634 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.409755 kubelet[2556]: W0430 03:28:50.409647 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.409755 kubelet[2556]: E0430 03:28:50.409658 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:50.409914 kubelet[2556]: E0430 03:28:50.409904 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.409972 kubelet[2556]: W0430 03:28:50.409963 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.410018 kubelet[2556]: E0430 03:28:50.410009 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.410280 kubelet[2556]: E0430 03:28:50.410267 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.410602 kubelet[2556]: W0430 03:28:50.410361 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.410602 kubelet[2556]: E0430 03:28:50.410389 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.410846 kubelet[2556]: E0430 03:28:50.410744 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.410846 kubelet[2556]: W0430 03:28:50.410755 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.410846 kubelet[2556]: E0430 03:28:50.410767 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.410993 kubelet[2556]: E0430 03:28:50.410984 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.411066 kubelet[2556]: W0430 03:28:50.411057 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.411134 kubelet[2556]: E0430 03:28:50.411125 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.411367 kubelet[2556]: E0430 03:28:50.411356 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.411639 kubelet[2556]: W0430 03:28:50.411445 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.411639 kubelet[2556]: E0430 03:28:50.411468 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:50.411779 kubelet[2556]: E0430 03:28:50.411768 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.411831 kubelet[2556]: W0430 03:28:50.411822 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.411876 kubelet[2556]: E0430 03:28:50.411868 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.412171 kubelet[2556]: E0430 03:28:50.412158 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.412335 kubelet[2556]: W0430 03:28:50.412248 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.412335 kubelet[2556]: E0430 03:28:50.412269 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.447601 kubelet[2556]: E0430 03:28:50.447552 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.447601 kubelet[2556]: W0430 03:28:50.447582 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.447601 kubelet[2556]: E0430 03:28:50.447618 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.447976 kubelet[2556]: E0430 03:28:50.447951 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.447976 kubelet[2556]: W0430 03:28:50.447964 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.448137 kubelet[2556]: E0430 03:28:50.448004 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.448306 kubelet[2556]: E0430 03:28:50.448282 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.448306 kubelet[2556]: W0430 03:28:50.448299 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.448448 kubelet[2556]: E0430 03:28:50.448320 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:50.448653 kubelet[2556]: E0430 03:28:50.448632 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.448653 kubelet[2556]: W0430 03:28:50.448645 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.449257 kubelet[2556]: E0430 03:28:50.448662 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.449257 kubelet[2556]: E0430 03:28:50.448955 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.449257 kubelet[2556]: W0430 03:28:50.448975 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.449257 kubelet[2556]: E0430 03:28:50.449000 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.449550 kubelet[2556]: E0430 03:28:50.449531 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.449650 kubelet[2556]: W0430 03:28:50.449635 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.449734 kubelet[2556]: E0430 03:28:50.449722 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.450057 kubelet[2556]: E0430 03:28:50.450030 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.450057 kubelet[2556]: W0430 03:28:50.450052 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.450171 kubelet[2556]: E0430 03:28:50.450070 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.450353 kubelet[2556]: E0430 03:28:50.450338 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.450353 kubelet[2556]: W0430 03:28:50.450351 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.450552 kubelet[2556]: E0430 03:28:50.450404 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:50.450716 kubelet[2556]: E0430 03:28:50.450701 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.450716 kubelet[2556]: W0430 03:28:50.450715 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.450865 kubelet[2556]: E0430 03:28:50.450849 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.451051 kubelet[2556]: E0430 03:28:50.451035 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.451051 kubelet[2556]: W0430 03:28:50.451049 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.451151 kubelet[2556]: E0430 03:28:50.451079 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.451340 kubelet[2556]: E0430 03:28:50.451326 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.451340 kubelet[2556]: W0430 03:28:50.451337 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.451430 kubelet[2556]: E0430 03:28:50.451355 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.451934 kubelet[2556]: E0430 03:28:50.451737 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.451934 kubelet[2556]: W0430 03:28:50.451758 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.451934 kubelet[2556]: E0430 03:28:50.451782 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.452300 kubelet[2556]: E0430 03:28:50.452219 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.452300 kubelet[2556]: W0430 03:28:50.452237 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.452300 kubelet[2556]: E0430 03:28:50.452280 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:50.452876 kubelet[2556]: E0430 03:28:50.452676 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.452876 kubelet[2556]: W0430 03:28:50.452692 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.452876 kubelet[2556]: E0430 03:28:50.452795 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.453311 kubelet[2556]: E0430 03:28:50.453120 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.453311 kubelet[2556]: W0430 03:28:50.453134 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.453311 kubelet[2556]: E0430 03:28:50.453166 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.453741 kubelet[2556]: E0430 03:28:50.453561 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.453741 kubelet[2556]: W0430 03:28:50.453578 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.453741 kubelet[2556]: E0430 03:28:50.453593 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.454000 kubelet[2556]: E0430 03:28:50.453986 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.454079 kubelet[2556]: W0430 03:28:50.454067 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.454501 kubelet[2556]: E0430 03:28:50.454139 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:50.454894 kubelet[2556]: E0430 03:28:50.454875 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:50.455016 kubelet[2556]: W0430 03:28:50.454998 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:50.455099 kubelet[2556]: E0430 03:28:50.455085 2556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:51.203252 containerd[1473]: time="2025-04-30T03:28:51.203181056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:51.204311 containerd[1473]: time="2025-04-30T03:28:51.204250465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:28:51.204630 containerd[1473]: time="2025-04-30T03:28:51.204597170Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:51.208037 containerd[1473]: time="2025-04-30T03:28:51.207987944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:51.209677 containerd[1473]: time="2025-04-30T03:28:51.209535455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.398599622s" Apr 30 03:28:51.209677 containerd[1473]: time="2025-04-30T03:28:51.209574945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:28:51.215270 containerd[1473]: time="2025-04-30T03:28:51.215123365Z" level=info msg="CreateContainer within sandbox \"dfe85c2886403ec3035d79862f9eb429140e18e45de2b39967bdbf551b24fe40\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:28:51.237222 containerd[1473]: time="2025-04-30T03:28:51.237042661Z" level=info msg="CreateContainer within sandbox \"dfe85c2886403ec3035d79862f9eb429140e18e45de2b39967bdbf551b24fe40\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c108de2c263da665c2b9feb6ec3c4d0d761e02adc8664912d46260c43b0f0513\"" Apr 30 03:28:51.238362 containerd[1473]: time="2025-04-30T03:28:51.237864956Z" level=info msg="StartContainer for \"c108de2c263da665c2b9feb6ec3c4d0d761e02adc8664912d46260c43b0f0513\"" Apr 30 03:28:51.260738 kubelet[2556]: E0430 03:28:51.258361 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bfvvm" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" Apr 30 03:28:51.303847 systemd[1]: Started cri-containerd-c108de2c263da665c2b9feb6ec3c4d0d761e02adc8664912d46260c43b0f0513.scope - libcontainer container c108de2c263da665c2b9feb6ec3c4d0d761e02adc8664912d46260c43b0f0513. 
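The repeated driver-call.go failures above come from kubelet probing the FlexVolume plugin directory before Calico's flexvol-driver init container (built from the pod2daemon-flexvol image pulled here) has installed the uds binary: with no executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds the probe gets empty output, and unmarshalling an empty string is exactly "unexpected end of JSON input". A minimal sketch, assuming only the published FlexVolume call convention (this is not Calico's actual uds driver), of the JSON a driver's init call is expected to print:

    # Illustrative sketch only: the JSON handshake kubelet's FlexVolume probe
    # expects on stdout when it executes "<driver> init". A missing executable,
    # or one that prints nothing, yields the unmarshal error logged above.
    import json
    import sys

    def main() -> None:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            # "attach": False tells kubelet this driver needs no attach/detach phase.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        else:
            print(json.dumps({"status": "Not supported"}))

    if __name__ == "__main__":
        main()

Once the flexvol-driver container has copied the real driver into that host directory, the periodic plugin probe starts succeeding and these warnings stop.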
Apr 30 03:28:51.347121 containerd[1473]: time="2025-04-30T03:28:51.346973570Z" level=info msg="StartContainer for \"c108de2c263da665c2b9feb6ec3c4d0d761e02adc8664912d46260c43b0f0513\" returns successfully" Apr 30 03:28:51.365052 kubelet[2556]: E0430 03:28:51.364968 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:51.367378 kubelet[2556]: E0430 03:28:51.367249 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:51.387872 systemd[1]: cri-containerd-c108de2c263da665c2b9feb6ec3c4d0d761e02adc8664912d46260c43b0f0513.scope: Deactivated successfully. Apr 30 03:28:51.439866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c108de2c263da665c2b9feb6ec3c4d0d761e02adc8664912d46260c43b0f0513-rootfs.mount: Deactivated successfully. Apr 30 03:28:51.461498 containerd[1473]: time="2025-04-30T03:28:51.453108346Z" level=info msg="shim disconnected" id=c108de2c263da665c2b9feb6ec3c4d0d761e02adc8664912d46260c43b0f0513 namespace=k8s.io Apr 30 03:28:51.461498 containerd[1473]: time="2025-04-30T03:28:51.461381837Z" level=warning msg="cleaning up after shim disconnected" id=c108de2c263da665c2b9feb6ec3c4d0d761e02adc8664912d46260c43b0f0513 namespace=k8s.io Apr 30 03:28:51.461498 containerd[1473]: time="2025-04-30T03:28:51.461403498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:28:52.370258 kubelet[2556]: E0430 03:28:52.368499 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:52.371920 containerd[1473]: time="2025-04-30T03:28:52.371877254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:28:53.258205 kubelet[2556]: E0430 03:28:53.257669 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bfvvm" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" Apr 30 03:28:55.258198 kubelet[2556]: E0430 03:28:55.258126 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bfvvm" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" Apr 30 03:28:57.258279 kubelet[2556]: E0430 03:28:57.258102 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bfvvm" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" Apr 30 03:28:57.370020 containerd[1473]: time="2025-04-30T03:28:57.369960889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:57.371340 containerd[1473]: time="2025-04-30T03:28:57.371082111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 
03:28:57.371986 containerd[1473]: time="2025-04-30T03:28:57.371951794Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:57.375479 containerd[1473]: time="2025-04-30T03:28:57.374982434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:57.375990 containerd[1473]: time="2025-04-30T03:28:57.375957420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.004040905s" Apr 30 03:28:57.375990 containerd[1473]: time="2025-04-30T03:28:57.375992596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:28:57.381798 containerd[1473]: time="2025-04-30T03:28:57.381623926Z" level=info msg="CreateContainer within sandbox \"dfe85c2886403ec3035d79862f9eb429140e18e45de2b39967bdbf551b24fe40\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:28:57.401881 containerd[1473]: time="2025-04-30T03:28:57.401814150Z" level=info msg="CreateContainer within sandbox \"dfe85c2886403ec3035d79862f9eb429140e18e45de2b39967bdbf551b24fe40\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"edf0bfec54a8130d1242579d70a9ecf7f876ce7692f2497fc83f82904d421225\"" Apr 30 03:28:57.403838 containerd[1473]: time="2025-04-30T03:28:57.403768963Z" level=info msg="StartContainer for \"edf0bfec54a8130d1242579d70a9ecf7f876ce7692f2497fc83f82904d421225\"" Apr 30 03:28:57.500784 systemd[1]: Started cri-containerd-edf0bfec54a8130d1242579d70a9ecf7f876ce7692f2497fc83f82904d421225.scope - libcontainer container edf0bfec54a8130d1242579d70a9ecf7f876ce7692f2497fc83f82904d421225. Apr 30 03:28:57.540960 containerd[1473]: time="2025-04-30T03:28:57.540890679Z" level=info msg="StartContainer for \"edf0bfec54a8130d1242579d70a9ecf7f876ce7692f2497fc83f82904d421225\" returns successfully" Apr 30 03:28:58.146903 systemd[1]: cri-containerd-edf0bfec54a8130d1242579d70a9ecf7f876ce7692f2497fc83f82904d421225.scope: Deactivated successfully. Apr 30 03:28:58.185492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edf0bfec54a8130d1242579d70a9ecf7f876ce7692f2497fc83f82904d421225-rootfs.mount: Deactivated successfully. 
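The recurring dns.go "Nameserver limits exceeded" warnings reflect kubelet capping the resolver list it passes to pods at three nameservers; entries beyond that in the node's resolv.conf are dropped, which is why the applied line in the warnings shows only 67.207.67.3, 67.207.67.2 and 67.207.67.3. A rough sketch of that trimming (the limit of three matches the upstream default; the parsing here is deliberately simplified):

    # Simplified sketch of the trimming behind kubelet's "Nameserver limits exceeded"
    # warning: only the first three nameserver entries from resolv.conf are applied.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = []
        for line in resolv_conf_text.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:MAX_NAMESERVERS]

    # A resolv.conf listing four or more servers is trimmed to the first three,
    # matching the "applied nameserver line" printed in the warnings above.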
Apr 30 03:28:58.196934 containerd[1473]: time="2025-04-30T03:28:58.194713444Z" level=info msg="shim disconnected" id=edf0bfec54a8130d1242579d70a9ecf7f876ce7692f2497fc83f82904d421225 namespace=k8s.io Apr 30 03:28:58.196934 containerd[1473]: time="2025-04-30T03:28:58.194782217Z" level=warning msg="cleaning up after shim disconnected" id=edf0bfec54a8130d1242579d70a9ecf7f876ce7692f2497fc83f82904d421225 namespace=k8s.io Apr 30 03:28:58.196934 containerd[1473]: time="2025-04-30T03:28:58.194806372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:28:58.211558 kubelet[2556]: I0430 03:28:58.211089 2556 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:28:58.220339 containerd[1473]: time="2025-04-30T03:28:58.220275260Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:28:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:28:58.248790 kubelet[2556]: I0430 03:28:58.248734 2556 topology_manager.go:215] "Topology Admit Handler" podUID="bb22c691-4fbf-4372-b30c-281e4f70d3e0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m94hd" Apr 30 03:28:58.262735 systemd[1]: Created slice kubepods-burstable-podbb22c691_4fbf_4372_b30c_281e4f70d3e0.slice - libcontainer container kubepods-burstable-podbb22c691_4fbf_4372_b30c_281e4f70d3e0.slice. Apr 30 03:28:58.267653 kubelet[2556]: I0430 03:28:58.265739 2556 topology_manager.go:215] "Topology Admit Handler" podUID="ce3de429-7f35-47dd-ba9a-d97e4159a358" podNamespace="calico-apiserver" podName="calico-apiserver-5579bb7b4d-fp4xj" Apr 30 03:28:58.278399 kubelet[2556]: I0430 03:28:58.278354 2556 topology_manager.go:215] "Topology Admit Handler" podUID="06d3b462-d34a-4562-b3c2-6a83b60fac79" podNamespace="calico-apiserver" podName="calico-apiserver-5579bb7b4d-2rxcz" Apr 30 03:28:58.280269 systemd[1]: Created slice kubepods-besteffort-podce3de429_7f35_47dd_ba9a_d97e4159a358.slice - libcontainer container kubepods-besteffort-podce3de429_7f35_47dd_ba9a_d97e4159a358.slice. Apr 30 03:28:58.284341 kubelet[2556]: I0430 03:28:58.281995 2556 topology_manager.go:215] "Topology Admit Handler" podUID="021ebc6e-397c-468a-9ff4-cdbf45e8c256" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xsv99" Apr 30 03:28:58.287880 kubelet[2556]: I0430 03:28:58.287686 2556 topology_manager.go:215] "Topology Admit Handler" podUID="7b1efadb-18f7-436b-8e71-a7c0c7270888" podNamespace="calico-system" podName="calico-kube-controllers-5846bc4884-ttjjz" Apr 30 03:28:58.301702 systemd[1]: Created slice kubepods-burstable-pod021ebc6e_397c_468a_9ff4_cdbf45e8c256.slice - libcontainer container kubepods-burstable-pod021ebc6e_397c_468a_9ff4_cdbf45e8c256.slice. 
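The kubepods-burstable-pod...slice units created here follow kubelet's systemd cgroup naming: the pod's QoS class selects the parent slice, and the pod UID is embedded with dashes mapped to underscores because '-' is systemd's slice hierarchy separator. A small sketch of that mapping (simplified; guaranteed pods sit directly under kubepods.slice):

    # Rough sketch of how the slice names in the surrounding log lines are formed
    # when kubelet uses the systemd cgroup driver.
    def pod_slice_name(pod_uid: str, qos_class: str) -> str:
        escaped_uid = pod_uid.replace("-", "_")  # '-' is the slice hierarchy separator
        qos = qos_class.lower()
        if qos == "guaranteed":
            return f"kubepods-pod{escaped_uid}.slice"
        return f"kubepods-{qos}-pod{escaped_uid}.slice"

    print(pod_slice_name("bb22c691-4fbf-4372-b30c-281e4f70d3e0", "Burstable"))
    # -> kubepods-burstable-podbb22c691_4fbf_4372_b30c_281e4f70d3e0.slice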
Apr 30 03:28:58.303624 kubelet[2556]: I0430 03:28:58.302933 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqcb\" (UniqueName: \"kubernetes.io/projected/bb22c691-4fbf-4372-b30c-281e4f70d3e0-kube-api-access-6jqcb\") pod \"coredns-7db6d8ff4d-m94hd\" (UID: \"bb22c691-4fbf-4372-b30c-281e4f70d3e0\") " pod="kube-system/coredns-7db6d8ff4d-m94hd" Apr 30 03:28:58.303624 kubelet[2556]: I0430 03:28:58.303008 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce3de429-7f35-47dd-ba9a-d97e4159a358-calico-apiserver-certs\") pod \"calico-apiserver-5579bb7b4d-fp4xj\" (UID: \"ce3de429-7f35-47dd-ba9a-d97e4159a358\") " pod="calico-apiserver/calico-apiserver-5579bb7b4d-fp4xj" Apr 30 03:28:58.303624 kubelet[2556]: I0430 03:28:58.303032 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvqr6\" (UniqueName: \"kubernetes.io/projected/ce3de429-7f35-47dd-ba9a-d97e4159a358-kube-api-access-xvqr6\") pod \"calico-apiserver-5579bb7b4d-fp4xj\" (UID: \"ce3de429-7f35-47dd-ba9a-d97e4159a358\") " pod="calico-apiserver/calico-apiserver-5579bb7b4d-fp4xj" Apr 30 03:28:58.303624 kubelet[2556]: I0430 03:28:58.303053 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb22c691-4fbf-4372-b30c-281e4f70d3e0-config-volume\") pod \"coredns-7db6d8ff4d-m94hd\" (UID: \"bb22c691-4fbf-4372-b30c-281e4f70d3e0\") " pod="kube-system/coredns-7db6d8ff4d-m94hd" Apr 30 03:28:58.312377 systemd[1]: Created slice kubepods-besteffort-pod06d3b462_d34a_4562_b3c2_6a83b60fac79.slice - libcontainer container kubepods-besteffort-pod06d3b462_d34a_4562_b3c2_6a83b60fac79.slice. Apr 30 03:28:58.323899 systemd[1]: Created slice kubepods-besteffort-pod7b1efadb_18f7_436b_8e71_a7c0c7270888.slice - libcontainer container kubepods-besteffort-pod7b1efadb_18f7_436b_8e71_a7c0c7270888.slice. 
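Whether a pod lands in a burstable slice (the coredns pods above) or a besteffort slice (the calico-apiserver and calico-kube-controllers pods here) is decided by its QoS class, which is derived from container resource requests and limits. A simplified sketch of that classification, ignoring init containers and some defaulting rules:

    # Simplified QoS classification: no requests or limits at all -> BestEffort;
    # cpu and memory limits equal to requests for every container -> Guaranteed;
    # anything else -> Burstable. Real kubelet logic also covers init containers.
    def qos_class(containers: list[dict]) -> str:
        requests = [c.get("requests", {}) for c in containers]
        limits = [c.get("limits", {}) for c in containers]
        if not any(requests) and not any(limits):
            return "BestEffort"
        for req, lim in zip(requests, limits):
            for resource in ("cpu", "memory"):
                if resource not in lim or lim[resource] != req.get(resource, lim[resource]):
                    return "Burstable"
        return "Guaranteed"

    print(qos_class([{"requests": {"cpu": "100m", "memory": "70Mi"},
                      "limits": {"memory": "170Mi"}}]))
    # -> Burstable (a pod with partial requests/limits, like coredns)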
Apr 30 03:28:58.404479 kubelet[2556]: I0430 03:28:58.403444 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/021ebc6e-397c-468a-9ff4-cdbf45e8c256-config-volume\") pod \"coredns-7db6d8ff4d-xsv99\" (UID: \"021ebc6e-397c-468a-9ff4-cdbf45e8c256\") " pod="kube-system/coredns-7db6d8ff4d-xsv99" Apr 30 03:28:58.404479 kubelet[2556]: I0430 03:28:58.403540 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqnfh\" (UniqueName: \"kubernetes.io/projected/06d3b462-d34a-4562-b3c2-6a83b60fac79-kube-api-access-lqnfh\") pod \"calico-apiserver-5579bb7b4d-2rxcz\" (UID: \"06d3b462-d34a-4562-b3c2-6a83b60fac79\") " pod="calico-apiserver/calico-apiserver-5579bb7b4d-2rxcz" Apr 30 03:28:58.404479 kubelet[2556]: I0430 03:28:58.403572 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnr5\" (UniqueName: \"kubernetes.io/projected/021ebc6e-397c-468a-9ff4-cdbf45e8c256-kube-api-access-dvnr5\") pod \"coredns-7db6d8ff4d-xsv99\" (UID: \"021ebc6e-397c-468a-9ff4-cdbf45e8c256\") " pod="kube-system/coredns-7db6d8ff4d-xsv99" Apr 30 03:28:58.404479 kubelet[2556]: I0430 03:28:58.403659 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/06d3b462-d34a-4562-b3c2-6a83b60fac79-calico-apiserver-certs\") pod \"calico-apiserver-5579bb7b4d-2rxcz\" (UID: \"06d3b462-d34a-4562-b3c2-6a83b60fac79\") " pod="calico-apiserver/calico-apiserver-5579bb7b4d-2rxcz" Apr 30 03:28:58.404479 kubelet[2556]: I0430 03:28:58.403689 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1efadb-18f7-436b-8e71-a7c0c7270888-tigera-ca-bundle\") pod \"calico-kube-controllers-5846bc4884-ttjjz\" (UID: \"7b1efadb-18f7-436b-8e71-a7c0c7270888\") " pod="calico-system/calico-kube-controllers-5846bc4884-ttjjz" Apr 30 03:28:58.405283 kubelet[2556]: I0430 03:28:58.403735 2556 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pj2n\" (UniqueName: \"kubernetes.io/projected/7b1efadb-18f7-436b-8e71-a7c0c7270888-kube-api-access-5pj2n\") pod \"calico-kube-controllers-5846bc4884-ttjjz\" (UID: \"7b1efadb-18f7-436b-8e71-a7c0c7270888\") " pod="calico-system/calico-kube-controllers-5846bc4884-ttjjz" Apr 30 03:28:58.423705 kubelet[2556]: E0430 03:28:58.423672 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:58.425191 containerd[1473]: time="2025-04-30T03:28:58.425123143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:28:58.595259 containerd[1473]: time="2025-04-30T03:28:58.594807760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5579bb7b4d-fp4xj,Uid:ce3de429-7f35-47dd-ba9a-d97e4159a358,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:28:58.608441 kubelet[2556]: E0430 03:28:58.608310 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:58.611598 containerd[1473]: time="2025-04-30T03:28:58.610355854Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xsv99,Uid:021ebc6e-397c-468a-9ff4-cdbf45e8c256,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:58.623940 containerd[1473]: time="2025-04-30T03:28:58.623776321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5579bb7b4d-2rxcz,Uid:06d3b462-d34a-4562-b3c2-6a83b60fac79,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:28:58.630983 containerd[1473]: time="2025-04-30T03:28:58.630821689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5846bc4884-ttjjz,Uid:7b1efadb-18f7-436b-8e71-a7c0c7270888,Namespace:calico-system,Attempt:0,}" Apr 30 03:28:58.872787 kubelet[2556]: E0430 03:28:58.872369 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:28:58.873414 containerd[1473]: time="2025-04-30T03:28:58.873377493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m94hd,Uid:bb22c691-4fbf-4372-b30c-281e4f70d3e0,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:58.978736 containerd[1473]: time="2025-04-30T03:28:58.978663357Z" level=error msg="Failed to destroy network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.985067 containerd[1473]: time="2025-04-30T03:28:58.985008162Z" level=error msg="encountered an error cleaning up failed sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.985383 containerd[1473]: time="2025-04-30T03:28:58.985356979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5579bb7b4d-2rxcz,Uid:06d3b462-d34a-4562-b3c2-6a83b60fac79,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.993393 containerd[1473]: time="2025-04-30T03:28:58.993345573Z" level=error msg="Failed to destroy network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.993562 kubelet[2556]: E0430 03:28:58.993427 2556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.993562 kubelet[2556]: E0430 03:28:58.993498 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5579bb7b4d-2rxcz" Apr 30 03:28:58.993824 kubelet[2556]: E0430 03:28:58.993570 2556 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5579bb7b4d-2rxcz" Apr 30 03:28:58.993824 kubelet[2556]: E0430 03:28:58.993629 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5579bb7b4d-2rxcz_calico-apiserver(06d3b462-d34a-4562-b3c2-6a83b60fac79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5579bb7b4d-2rxcz_calico-apiserver(06d3b462-d34a-4562-b3c2-6a83b60fac79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5579bb7b4d-2rxcz" podUID="06d3b462-d34a-4562-b3c2-6a83b60fac79" Apr 30 03:28:58.994647 containerd[1473]: time="2025-04-30T03:28:58.994609094Z" level=error msg="encountered an error cleaning up failed sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.994799 containerd[1473]: time="2025-04-30T03:28:58.994777831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5579bb7b4d-fp4xj,Uid:ce3de429-7f35-47dd-ba9a-d97e4159a358,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.994948 containerd[1473]: time="2025-04-30T03:28:58.994216541Z" level=error msg="Failed to destroy network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.996801 containerd[1473]: time="2025-04-30T03:28:58.996576584Z" level=error msg="encountered an error cleaning up failed sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.997088 containerd[1473]: 
time="2025-04-30T03:28:58.997049616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xsv99,Uid:021ebc6e-397c-468a-9ff4-cdbf45e8c256,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.997459 kubelet[2556]: E0430 03:28:58.997414 2556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.997697 kubelet[2556]: E0430 03:28:58.997474 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xsv99" Apr 30 03:28:58.997697 kubelet[2556]: E0430 03:28:58.997495 2556 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xsv99" Apr 30 03:28:58.997697 kubelet[2556]: E0430 03:28:58.997597 2556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.997697 kubelet[2556]: E0430 03:28:58.997625 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5579bb7b4d-fp4xj" Apr 30 03:28:58.997908 kubelet[2556]: E0430 03:28:58.997649 2556 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5579bb7b4d-fp4xj" Apr 30 03:28:58.997908 kubelet[2556]: E0430 03:28:58.997692 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"calico-apiserver-5579bb7b4d-fp4xj_calico-apiserver(ce3de429-7f35-47dd-ba9a-d97e4159a358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5579bb7b4d-fp4xj_calico-apiserver(ce3de429-7f35-47dd-ba9a-d97e4159a358)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5579bb7b4d-fp4xj" podUID="ce3de429-7f35-47dd-ba9a-d97e4159a358" Apr 30 03:28:58.997908 kubelet[2556]: E0430 03:28:58.997546 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xsv99_kube-system(021ebc6e-397c-468a-9ff4-cdbf45e8c256)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xsv99_kube-system(021ebc6e-397c-468a-9ff4-cdbf45e8c256)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xsv99" podUID="021ebc6e-397c-468a-9ff4-cdbf45e8c256" Apr 30 03:28:58.998575 containerd[1473]: time="2025-04-30T03:28:58.998046115Z" level=error msg="Failed to destroy network for sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.999156 containerd[1473]: time="2025-04-30T03:28:58.998962005Z" level=error msg="encountered an error cleaning up failed sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.999156 containerd[1473]: time="2025-04-30T03:28:58.999035527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5846bc4884-ttjjz,Uid:7b1efadb-18f7-436b-8e71-a7c0c7270888,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.999302 kubelet[2556]: E0430 03:28:58.999249 2556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:58.999353 kubelet[2556]: E0430 03:28:58.999297 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5846bc4884-ttjjz" Apr 30 03:28:58.999353 kubelet[2556]: E0430 03:28:58.999322 2556 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5846bc4884-ttjjz" Apr 30 03:28:58.999603 kubelet[2556]: E0430 03:28:58.999363 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5846bc4884-ttjjz_calico-system(7b1efadb-18f7-436b-8e71-a7c0c7270888)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5846bc4884-ttjjz_calico-system(7b1efadb-18f7-436b-8e71-a7c0c7270888)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5846bc4884-ttjjz" podUID="7b1efadb-18f7-436b-8e71-a7c0c7270888" Apr 30 03:28:59.039644 containerd[1473]: time="2025-04-30T03:28:59.039485943Z" level=error msg="Failed to destroy network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.040203 containerd[1473]: time="2025-04-30T03:28:59.040033136Z" level=error msg="encountered an error cleaning up failed sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.040203 containerd[1473]: time="2025-04-30T03:28:59.040091486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m94hd,Uid:bb22c691-4fbf-4372-b30c-281e4f70d3e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.041046 kubelet[2556]: E0430 03:28:59.040605 2556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.041046 kubelet[2556]: E0430 
03:28:59.040684 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m94hd" Apr 30 03:28:59.041046 kubelet[2556]: E0430 03:28:59.040712 2556 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m94hd" Apr 30 03:28:59.041243 kubelet[2556]: E0430 03:28:59.040765 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-m94hd_kube-system(bb22c691-4fbf-4372-b30c-281e4f70d3e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-m94hd_kube-system(bb22c691-4fbf-4372-b30c-281e4f70d3e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m94hd" podUID="bb22c691-4fbf-4372-b30c-281e4f70d3e0" Apr 30 03:28:59.266392 systemd[1]: Created slice kubepods-besteffort-pod8f623e99_7bb9_4ed3_8866_963ff1311503.slice - libcontainer container kubepods-besteffort-pod8f623e99_7bb9_4ed3_8866_963ff1311503.slice. 
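Every sandbox failure in this stretch points at the same precondition, spelled out in the plugin's own error text: the Calico CNI binary stats /var/lib/calico/nodename, a file the calico/node container writes onto the host once it is running with /var/lib/calico/ mounted. Until the calico/node image pull that finishes at 03:29:06 below, every CNI ADD and DEL fails the same way. A minimal sketch of that precondition check, assuming nothing about Calico's source beyond what the error message states:

    # Sketch of the readiness check implied by the repeated error above;
    # illustrative only, not Calico's actual code.
    import os
    import sys

    NODENAME_FILE = "/var/lib/calico/nodename"  # written by calico/node at startup

    def calico_node_ready() -> bool:
        return os.path.isfile(NODENAME_FILE)

    if __name__ == "__main__":
        if not calico_node_ready():
            sys.exit(
                f"stat {NODENAME_FILE}: no such file or directory: check that the "
                "calico/node container is running and has mounted /var/lib/calico/"
            )
        print("calico node name:", open(NODENAME_FILE).read().strip())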
Apr 30 03:28:59.271421 containerd[1473]: time="2025-04-30T03:28:59.271375734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bfvvm,Uid:8f623e99-7bb9-4ed3-8866-963ff1311503,Namespace:calico-system,Attempt:0,}" Apr 30 03:28:59.353945 containerd[1473]: time="2025-04-30T03:28:59.353837431Z" level=error msg="Failed to destroy network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.354348 containerd[1473]: time="2025-04-30T03:28:59.354295706Z" level=error msg="encountered an error cleaning up failed sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.354494 containerd[1473]: time="2025-04-30T03:28:59.354384281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bfvvm,Uid:8f623e99-7bb9-4ed3-8866-963ff1311503,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.354854 kubelet[2556]: E0430 03:28:59.354792 2556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.355317 kubelet[2556]: E0430 03:28:59.354871 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bfvvm" Apr 30 03:28:59.355317 kubelet[2556]: E0430 03:28:59.354898 2556 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bfvvm" Apr 30 03:28:59.355317 kubelet[2556]: E0430 03:28:59.354968 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bfvvm_calico-system(8f623e99-7bb9-4ed3-8866-963ff1311503)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bfvvm_calico-system(8f623e99-7bb9-4ed3-8866-963ff1311503)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bfvvm" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" Apr 30 03:28:59.426342 kubelet[2556]: I0430 03:28:59.426215 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:28:59.428388 kubelet[2556]: I0430 03:28:59.427492 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:28:59.430684 kubelet[2556]: I0430 03:28:59.430654 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:28:59.433483 containerd[1473]: time="2025-04-30T03:28:59.432880173Z" level=info msg="StopPodSandbox for \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\"" Apr 30 03:28:59.434752 containerd[1473]: time="2025-04-30T03:28:59.434315318Z" level=info msg="StopPodSandbox for \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\"" Apr 30 03:28:59.436717 containerd[1473]: time="2025-04-30T03:28:59.436662706Z" level=info msg="Ensure that sandbox 400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d in task-service has been cleanup successfully" Apr 30 03:28:59.437089 containerd[1473]: time="2025-04-30T03:28:59.437052837Z" level=info msg="Ensure that sandbox 0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd in task-service has been cleanup successfully" Apr 30 03:28:59.438059 containerd[1473]: time="2025-04-30T03:28:59.437706926Z" level=info msg="StopPodSandbox for \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\"" Apr 30 03:28:59.440741 kubelet[2556]: I0430 03:28:59.440702 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:28:59.442255 containerd[1473]: time="2025-04-30T03:28:59.441368668Z" level=info msg="Ensure that sandbox b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9 in task-service has been cleanup successfully" Apr 30 03:28:59.446993 containerd[1473]: time="2025-04-30T03:28:59.446824641Z" level=info msg="StopPodSandbox for \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\"" Apr 30 03:28:59.447167 containerd[1473]: time="2025-04-30T03:28:59.447105719Z" level=info msg="Ensure that sandbox 1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e in task-service has been cleanup successfully" Apr 30 03:28:59.456914 kubelet[2556]: I0430 03:28:59.456651 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:28:59.460010 containerd[1473]: time="2025-04-30T03:28:59.459595831Z" level=info msg="StopPodSandbox for \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\"" Apr 30 03:28:59.460010 containerd[1473]: time="2025-04-30T03:28:59.459801687Z" level=info msg="Ensure that sandbox 728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e in task-service has been cleanup successfully" Apr 30 03:28:59.466379 kubelet[2556]: I0430 03:28:59.465670 2556 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:28:59.468104 containerd[1473]: time="2025-04-30T03:28:59.467786010Z" level=info msg="StopPodSandbox for \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\"" Apr 30 03:28:59.468104 containerd[1473]: time="2025-04-30T03:28:59.468039642Z" level=info msg="Ensure that sandbox f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305 in task-service has been cleanup successfully" Apr 30 03:28:59.565999 containerd[1473]: time="2025-04-30T03:28:59.565912939Z" level=error msg="StopPodSandbox for \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\" failed" error="failed to destroy network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.566752 kubelet[2556]: E0430 03:28:59.566432 2556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:28:59.566752 kubelet[2556]: E0430 03:28:59.566566 2556 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d"} Apr 30 03:28:59.566752 kubelet[2556]: E0430 03:28:59.566672 2556 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb22c691-4fbf-4372-b30c-281e4f70d3e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:28:59.566752 kubelet[2556]: E0430 03:28:59.566706 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb22c691-4fbf-4372-b30c-281e4f70d3e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m94hd" podUID="bb22c691-4fbf-4372-b30c-281e4f70d3e0" Apr 30 03:28:59.592075 containerd[1473]: time="2025-04-30T03:28:59.592001543Z" level=error msg="StopPodSandbox for \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\" failed" error="failed to destroy network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.595373 kubelet[2556]: E0430 
03:28:59.595151 2556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:28:59.595373 kubelet[2556]: E0430 03:28:59.595216 2556 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e"} Apr 30 03:28:59.595373 kubelet[2556]: E0430 03:28:59.595265 2556 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06d3b462-d34a-4562-b3c2-6a83b60fac79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:28:59.595373 kubelet[2556]: E0430 03:28:59.595299 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06d3b462-d34a-4562-b3c2-6a83b60fac79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5579bb7b4d-2rxcz" podUID="06d3b462-d34a-4562-b3c2-6a83b60fac79" Apr 30 03:28:59.605648 containerd[1473]: time="2025-04-30T03:28:59.605585446Z" level=error msg="StopPodSandbox for \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\" failed" error="failed to destroy network for sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.606388 kubelet[2556]: E0430 03:28:59.606143 2556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:28:59.606388 kubelet[2556]: E0430 03:28:59.606220 2556 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9"} Apr 30 03:28:59.606388 kubelet[2556]: E0430 03:28:59.606308 2556 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b1efadb-18f7-436b-8e71-a7c0c7270888\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:28:59.606388 kubelet[2556]: E0430 03:28:59.606346 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b1efadb-18f7-436b-8e71-a7c0c7270888\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5846bc4884-ttjjz" podUID="7b1efadb-18f7-436b-8e71-a7c0c7270888" Apr 30 03:28:59.608390 containerd[1473]: time="2025-04-30T03:28:59.607825242Z" level=error msg="StopPodSandbox for \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\" failed" error="failed to destroy network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.608544 kubelet[2556]: E0430 03:28:59.608205 2556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:28:59.608544 kubelet[2556]: E0430 03:28:59.608262 2556 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e"} Apr 30 03:28:59.608544 kubelet[2556]: E0430 03:28:59.608306 2556 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce3de429-7f35-47dd-ba9a-d97e4159a358\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:28:59.608544 kubelet[2556]: E0430 03:28:59.608336 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce3de429-7f35-47dd-ba9a-d97e4159a358\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5579bb7b4d-fp4xj" podUID="ce3de429-7f35-47dd-ba9a-d97e4159a358" Apr 30 03:28:59.610489 containerd[1473]: time="2025-04-30T03:28:59.609159555Z" level=error msg="StopPodSandbox for 
\"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\" failed" error="failed to destroy network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.610774 kubelet[2556]: E0430 03:28:59.610719 2556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:28:59.610853 kubelet[2556]: E0430 03:28:59.610793 2556 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd"} Apr 30 03:28:59.610853 kubelet[2556]: E0430 03:28:59.610834 2556 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f623e99-7bb9-4ed3-8866-963ff1311503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:28:59.611009 kubelet[2556]: E0430 03:28:59.610857 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f623e99-7bb9-4ed3-8866-963ff1311503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bfvvm" podUID="8f623e99-7bb9-4ed3-8866-963ff1311503" Apr 30 03:28:59.623857 containerd[1473]: time="2025-04-30T03:28:59.623779373Z" level=error msg="StopPodSandbox for \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\" failed" error="failed to destroy network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:59.624465 kubelet[2556]: E0430 03:28:59.624408 2556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:28:59.624647 kubelet[2556]: E0430 03:28:59.624476 2556 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305"} Apr 30 03:28:59.624738 kubelet[2556]: E0430 03:28:59.624686 2556 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"021ebc6e-397c-468a-9ff4-cdbf45e8c256\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:28:59.624842 kubelet[2556]: E0430 03:28:59.624746 2556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"021ebc6e-397c-468a-9ff4-cdbf45e8c256\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xsv99" podUID="021ebc6e-397c-468a-9ff4-cdbf45e8c256" Apr 30 03:29:06.809758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285232703.mount: Deactivated successfully. Apr 30 03:29:06.981457 containerd[1473]: time="2025-04-30T03:29:06.967305356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:06.982192 containerd[1473]: time="2025-04-30T03:29:06.982135628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.546488226s" Apr 30 03:29:06.982327 containerd[1473]: time="2025-04-30T03:29:06.982309493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:29:06.982504 containerd[1473]: time="2025-04-30T03:29:06.962158558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:29:07.043338 containerd[1473]: time="2025-04-30T03:29:07.042183139Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.043338 containerd[1473]: time="2025-04-30T03:29:07.043315882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:07.161222 containerd[1473]: time="2025-04-30T03:29:07.159743815Z" level=info msg="CreateContainer within sandbox \"dfe85c2886403ec3035d79862f9eb429140e18e45de2b39967bdbf551b24fe40\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:29:07.295253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513145416.mount: Deactivated successfully. 
Apr 30 03:29:07.315260 containerd[1473]: time="2025-04-30T03:29:07.315170300Z" level=info msg="CreateContainer within sandbox \"dfe85c2886403ec3035d79862f9eb429140e18e45de2b39967bdbf551b24fe40\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7a1c80f5b65e098e14fff113d7432864c6876f81c7373a480908160201a8ccbd\"" Apr 30 03:29:07.317611 containerd[1473]: time="2025-04-30T03:29:07.317569387Z" level=info msg="StartContainer for \"7a1c80f5b65e098e14fff113d7432864c6876f81c7373a480908160201a8ccbd\"" Apr 30 03:29:07.421042 systemd[1]: Started cri-containerd-7a1c80f5b65e098e14fff113d7432864c6876f81c7373a480908160201a8ccbd.scope - libcontainer container 7a1c80f5b65e098e14fff113d7432864c6876f81c7373a480908160201a8ccbd. Apr 30 03:29:07.492396 containerd[1473]: time="2025-04-30T03:29:07.491773651Z" level=info msg="StartContainer for \"7a1c80f5b65e098e14fff113d7432864c6876f81c7373a480908160201a8ccbd\" returns successfully" Apr 30 03:29:07.537978 kubelet[2556]: E0430 03:29:07.536212 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:07.603681 systemd[1]: Started sshd@8-143.198.63.212:22-139.178.89.65:56246.service - OpenSSH per-connection server daemon (139.178.89.65:56246). Apr 30 03:29:07.663350 kubelet[2556]: I0430 03:29:07.662062 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b74fm" podStartSLOduration=1.443303788 podStartE2EDuration="22.662037935s" podCreationTimestamp="2025-04-30 03:28:45 +0000 UTC" firstStartedPulling="2025-04-30 03:28:45.828354695 +0000 UTC m=+21.752654419" lastFinishedPulling="2025-04-30 03:29:07.047088838 +0000 UTC m=+42.971388566" observedRunningTime="2025-04-30 03:29:07.652894214 +0000 UTC m=+43.577193947" watchObservedRunningTime="2025-04-30 03:29:07.662037935 +0000 UTC m=+43.586337668" Apr 30 03:29:07.811187 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:29:07.814701 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 03:29:07.817798 sshd[3659]: Accepted publickey for core from 139.178.89.65 port 56246 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:07.821143 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:07.837987 systemd-logind[1450]: New session 8 of user core. Apr 30 03:29:07.846867 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:29:08.140895 sshd[3659]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:08.147299 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:29:08.148651 systemd[1]: sshd@8-143.198.63.212:22-139.178.89.65:56246.service: Deactivated successfully. Apr 30 03:29:08.155108 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:29:08.157286 systemd-logind[1450]: Removed session 8. 
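The startup-latency line above also encodes a relationship worth noting: podStartE2EDuration (about 22.66 s from pod creation to observed running) minus the time spent pulling the calico/node image (firstStartedPulling to lastFinishedPulling, about 21.22 s) reproduces the reported podStartSLOduration of about 1.44 s, so the SLO figure appears to exclude image-pull time. A quick check using the timestamps from the log:

    # Worked arithmetic on the kubelet pod_startup_latency_tracker line above
    # (editor's check, not kubelet code); seconds taken from the logged timestamps.
    first_started_pulling = 45.828354695        # 03:28:45.828354695
    last_finished_pulling = 60 + 7.047088838    # 03:29:07.047088838
    pull_time = last_finished_pulling - first_started_pulling

    e2e = 22.662037935   # podStartE2EDuration
    slo = 1.443303788    # podStartSLOduration

    print(f"image pull:  {pull_time:.9f} s")        # ~21.218734143 s
    print(f"e2e - pull:  {e2e - pull_time:.9f} s")  # ~1.443303792 s, matches slo
    print(f"residual:    {abs(e2e - pull_time - slo) * 1e9:.0f} ns")  # a few ns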
Apr 30 03:29:08.522996 kubelet[2556]: E0430 03:29:08.522419 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:09.523615 kubelet[2556]: E0430 03:29:09.523117 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:10.062617 kernel: bpftool[3900]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:29:10.488855 systemd-networkd[1374]: vxlan.calico: Link UP Apr 30 03:29:10.488864 systemd-networkd[1374]: vxlan.calico: Gained carrier Apr 30 03:29:11.264902 containerd[1473]: time="2025-04-30T03:29:11.264834079Z" level=info msg="StopPodSandbox for \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\"" Apr 30 03:29:11.267381 containerd[1473]: time="2025-04-30T03:29:11.265469109Z" level=info msg="StopPodSandbox for \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\"" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.400 [INFO][3997] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.400 [INFO][3997] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" iface="eth0" netns="/var/run/netns/cni-97eea411-dbbf-d470-5607-e60287e82c59" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.401 [INFO][3997] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" iface="eth0" netns="/var/run/netns/cni-97eea411-dbbf-d470-5607-e60287e82c59" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.401 [INFO][3997] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" iface="eth0" netns="/var/run/netns/cni-97eea411-dbbf-d470-5607-e60287e82c59" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.401 [INFO][3997] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.401 [INFO][3997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.597 [INFO][4012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" HandleID="k8s-pod-network.1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.598 [INFO][4012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.599 [INFO][4012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.610 [WARNING][4012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" HandleID="k8s-pod-network.1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.610 [INFO][4012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" HandleID="k8s-pod-network.1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.613 [INFO][4012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:11.620839 containerd[1473]: 2025-04-30 03:29:11.616 [INFO][3997] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:11.624438 systemd[1]: run-netns-cni\x2d97eea411\x2ddbbf\x2dd470\x2d5607\x2de60287e82c59.mount: Deactivated successfully. Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.392 [INFO][3996] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.393 [INFO][3996] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" iface="eth0" netns="/var/run/netns/cni-da1d4f2d-0638-e9e3-061f-cfd2cb3a12a4" Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.394 [INFO][3996] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" iface="eth0" netns="/var/run/netns/cni-da1d4f2d-0638-e9e3-061f-cfd2cb3a12a4" Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.398 [INFO][3996] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" iface="eth0" netns="/var/run/netns/cni-da1d4f2d-0638-e9e3-061f-cfd2cb3a12a4" Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.399 [INFO][3996] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.399 [INFO][3996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.597 [INFO][4010] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" HandleID="k8s-pod-network.b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.598 [INFO][4010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.613 [INFO][4010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.632 [WARNING][4010] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" HandleID="k8s-pod-network.b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.632 [INFO][4010] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" HandleID="k8s-pod-network.b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.638 [INFO][4010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:11.648438 containerd[1473]: 2025-04-30 03:29:11.643 [INFO][3996] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:11.648438 containerd[1473]: time="2025-04-30T03:29:11.648007390Z" level=info msg="TearDown network for sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\" successfully" Apr 30 03:29:11.648438 containerd[1473]: time="2025-04-30T03:29:11.648050571Z" level=info msg="StopPodSandbox for \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\" returns successfully" Apr 30 03:29:11.648438 containerd[1473]: time="2025-04-30T03:29:11.648315154Z" level=info msg="TearDown network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\" successfully" Apr 30 03:29:11.648438 containerd[1473]: time="2025-04-30T03:29:11.648355060Z" level=info msg="StopPodSandbox for \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\" returns successfully" Apr 30 03:29:11.651668 systemd[1]: run-netns-cni\x2dda1d4f2d\x2d0638\x2de9e3\x2d061f\x2dcfd2cb3a12a4.mount: Deactivated successfully. 
Apr 30 03:29:11.674142 containerd[1473]: time="2025-04-30T03:29:11.673039142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5846bc4884-ttjjz,Uid:7b1efadb-18f7-436b-8e71-a7c0c7270888,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:11.684737 containerd[1473]: time="2025-04-30T03:29:11.684098127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5579bb7b4d-fp4xj,Uid:ce3de429-7f35-47dd-ba9a-d97e4159a358,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:12.011794 systemd-networkd[1374]: cali2231e44613b: Link UP Apr 30 03:29:12.012218 systemd-networkd[1374]: cali2231e44613b: Gained carrier Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.831 [INFO][4025] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0 calico-apiserver-5579bb7b4d- calico-apiserver ce3de429-7f35-47dd-ba9a-d97e4159a358 872 0 2025-04-30 03:28:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5579bb7b4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-0-7c044d2e24 calico-apiserver-5579bb7b4d-fp4xj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2231e44613b [] []}} ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-fp4xj" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.831 [INFO][4025] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-fp4xj" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.918 [INFO][4048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" HandleID="k8s-pod-network.415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.934 [INFO][4048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" HandleID="k8s-pod-network.415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ffa70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-0-7c044d2e24", "pod":"calico-apiserver-5579bb7b4d-fp4xj", "timestamp":"2025-04-30 03:29:11.918062778 +0000 UTC"}, Hostname:"ci-4081.3.3-0-7c044d2e24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.939 [INFO][4048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.940 [INFO][4048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.940 [INFO][4048] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-7c044d2e24' Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.946 [INFO][4048] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.956 [INFO][4048] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.964 [INFO][4048] ipam/ipam.go 489: Trying affinity for 192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.967 [INFO][4048] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.972 [INFO][4048] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.972 [INFO][4048] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.975 [INFO][4048] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805 Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.982 [INFO][4048] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.995 [INFO][4048] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.129/26] block=192.168.74.128/26 handle="k8s-pod-network.415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.995 [INFO][4048] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.129/26] handle="k8s-pod-network.415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.995 [INFO][4048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
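For reference, the block the IPAM plugin keeps confirming affinity for, 192.168.74.128/26, spans 64 addresses, and 192.168.74.129 claimed above is the first one handed out on this node; the later assignments in this section (.130, .131, .132) come from the same block. A quick check with Python's ipaddress module (illustrative only, not part of the log):

    import ipaddress

    block = ipaddress.ip_network("192.168.74.128/26")
    print(block.network_address, "-", block.broadcast_address, block.num_addresses)
    # 192.168.74.128 - 192.168.74.191 64

    # addresses assigned to the workload endpoints created in this section
    for ip in ("192.168.74.129", "192.168.74.130", "192.168.74.131", "192.168.74.132"):
        assert ipaddress.ip_address(ip) in block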
Apr 30 03:29:12.053477 containerd[1473]: 2025-04-30 03:29:11.995 [INFO][4048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.129/26] IPv6=[] ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" HandleID="k8s-pod-network.415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:12.054750 containerd[1473]: 2025-04-30 03:29:11.999 [INFO][4025] cni-plugin/k8s.go 386: Populated endpoint ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-fp4xj" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0", GenerateName:"calico-apiserver-5579bb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce3de429-7f35-47dd-ba9a-d97e4159a358", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5579bb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"", Pod:"calico-apiserver-5579bb7b4d-fp4xj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2231e44613b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:12.054750 containerd[1473]: 2025-04-30 03:29:11.999 [INFO][4025] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.129/32] ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-fp4xj" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:12.054750 containerd[1473]: 2025-04-30 03:29:12.000 [INFO][4025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2231e44613b ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-fp4xj" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:12.054750 containerd[1473]: 2025-04-30 03:29:12.014 [INFO][4025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-fp4xj" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:12.054750 containerd[1473]: 2025-04-30 03:29:12.016 [INFO][4025] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-fp4xj" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0", GenerateName:"calico-apiserver-5579bb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce3de429-7f35-47dd-ba9a-d97e4159a358", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5579bb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805", Pod:"calico-apiserver-5579bb7b4d-fp4xj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2231e44613b", MAC:"c2:53:f3:2a:30:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:12.054750 containerd[1473]: 2025-04-30 03:29:12.049 [INFO][4025] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-fp4xj" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:12.145164 systemd-networkd[1374]: cali7377dafce28: Link UP Apr 30 03:29:12.147109 systemd-networkd[1374]: cali7377dafce28: Gained carrier Apr 30 03:29:12.191238 containerd[1473]: time="2025-04-30T03:29:12.189959489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:12.191238 containerd[1473]: time="2025-04-30T03:29:12.190105852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:12.191238 containerd[1473]: time="2025-04-30T03:29:12.190122376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.191238 containerd[1473]: time="2025-04-30T03:29:12.190252343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:11.874 [INFO][4034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0 calico-kube-controllers-5846bc4884- calico-system 7b1efadb-18f7-436b-8e71-a7c0c7270888 871 0 2025-04-30 03:28:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5846bc4884 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-0-7c044d2e24 calico-kube-controllers-5846bc4884-ttjjz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7377dafce28 [] []}} ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Namespace="calico-system" Pod="calico-kube-controllers-5846bc4884-ttjjz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:11.876 [INFO][4034] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Namespace="calico-system" Pod="calico-kube-controllers-5846bc4884-ttjjz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:11.932 [INFO][4053] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" HandleID="k8s-pod-network.9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:11.950 [INFO][4053] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" HandleID="k8s-pod-network.9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-0-7c044d2e24", "pod":"calico-kube-controllers-5846bc4884-ttjjz", "timestamp":"2025-04-30 03:29:11.932900042 +0000 UTC"}, Hostname:"ci-4081.3.3-0-7c044d2e24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:11.950 [INFO][4053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:11.995 [INFO][4053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:11.995 [INFO][4053] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-7c044d2e24' Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.004 [INFO][4053] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.028 [INFO][4053] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.052 [INFO][4053] ipam/ipam.go 489: Trying affinity for 192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.058 [INFO][4053] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.065 [INFO][4053] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.065 [INFO][4053] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.073 [INFO][4053] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277 Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.087 [INFO][4053] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.124 [INFO][4053] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.130/26] block=192.168.74.128/26 handle="k8s-pod-network.9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.125 [INFO][4053] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.130/26] handle="k8s-pod-network.9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.125 [INFO][4053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
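Every bracketed Calico line above follows the same shape: timestamp, level, the plugin's PID, the source file and line, then the message. A small parser for that format (the regex and field names are assumptions made for this sketch, not anything the plugin ships):

    import re

    CALICO_LINE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"\[(?P<level>[A-Z]+)\]\[(?P<pid>\d+)\] "
        r"(?P<src>\S+ \d+): (?P<msg>.*)"
    )

    sample = ("2025-04-30 03:29:12.065 [INFO][4053] ipam/ipam.go 232: "
              "Affinity is confirmed and block has been loaded cidr=192.168.74.128/26")
    m = CALICO_LINE.match(sample)
    print(m.group("level"), m.group("src"), "->", m.group("msg"))
    # INFO ipam/ipam.go 232 -> Affinity is confirmed and block has been loaded cidr=192.168.74.128/26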
Apr 30 03:29:12.228551 containerd[1473]: 2025-04-30 03:29:12.128 [INFO][4053] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.130/26] IPv6=[] ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" HandleID="k8s-pod-network.9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:12.231853 containerd[1473]: 2025-04-30 03:29:12.141 [INFO][4034] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Namespace="calico-system" Pod="calico-kube-controllers-5846bc4884-ttjjz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0", GenerateName:"calico-kube-controllers-5846bc4884-", Namespace:"calico-system", SelfLink:"", UID:"7b1efadb-18f7-436b-8e71-a7c0c7270888", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5846bc4884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"", Pod:"calico-kube-controllers-5846bc4884-ttjjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7377dafce28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:12.231853 containerd[1473]: 2025-04-30 03:29:12.141 [INFO][4034] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.130/32] ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Namespace="calico-system" Pod="calico-kube-controllers-5846bc4884-ttjjz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:12.231853 containerd[1473]: 2025-04-30 03:29:12.141 [INFO][4034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7377dafce28 ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Namespace="calico-system" Pod="calico-kube-controllers-5846bc4884-ttjjz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:12.231853 containerd[1473]: 2025-04-30 03:29:12.147 [INFO][4034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Namespace="calico-system" Pod="calico-kube-controllers-5846bc4884-ttjjz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:12.231853 
containerd[1473]: 2025-04-30 03:29:12.150 [INFO][4034] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Namespace="calico-system" Pod="calico-kube-controllers-5846bc4884-ttjjz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0", GenerateName:"calico-kube-controllers-5846bc4884-", Namespace:"calico-system", SelfLink:"", UID:"7b1efadb-18f7-436b-8e71-a7c0c7270888", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5846bc4884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277", Pod:"calico-kube-controllers-5846bc4884-ttjjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7377dafce28", MAC:"7a:43:23:37:d9:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:12.231853 containerd[1473]: 2025-04-30 03:29:12.201 [INFO][4034] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277" Namespace="calico-system" Pod="calico-kube-controllers-5846bc4884-ttjjz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:12.303951 systemd[1]: Started cri-containerd-415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805.scope - libcontainer container 415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805. Apr 30 03:29:12.310082 containerd[1473]: time="2025-04-30T03:29:12.307678104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:12.311304 containerd[1473]: time="2025-04-30T03:29:12.310781445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:12.316693 containerd[1473]: time="2025-04-30T03:29:12.311030927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.338665 containerd[1473]: time="2025-04-30T03:29:12.324173496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:12.344948 containerd[1473]: time="2025-04-30T03:29:12.342012643Z" level=info msg="StopPodSandbox for \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\"" Apr 30 03:29:12.392924 systemd[1]: Started cri-containerd-9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277.scope - libcontainer container 9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277. Apr 30 03:29:12.462886 systemd[1]: Started sshd@9-143.198.63.212:22-92.255.57.132:45274.service - OpenSSH per-connection server daemon (92.255.57.132:45274). Apr 30 03:29:12.497530 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Apr 30 03:29:12.545224 containerd[1473]: time="2025-04-30T03:29:12.545044357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5846bc4884-ttjjz,Uid:7b1efadb-18f7-436b-8e71-a7c0c7270888,Namespace:calico-system,Attempt:1,} returns sandbox id \"9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277\"" Apr 30 03:29:12.547249 containerd[1473]: time="2025-04-30T03:29:12.547187772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5579bb7b4d-fp4xj,Uid:ce3de429-7f35-47dd-ba9a-d97e4159a358,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805\"" Apr 30 03:29:12.581815 containerd[1473]: time="2025-04-30T03:29:12.581263762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.593 [INFO][4170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.594 [INFO][4170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" iface="eth0" netns="/var/run/netns/cni-6bc12fba-a6f4-e5e8-f555-ab192ef60826" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.595 [INFO][4170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" iface="eth0" netns="/var/run/netns/cni-6bc12fba-a6f4-e5e8-f555-ab192ef60826" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.596 [INFO][4170] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" iface="eth0" netns="/var/run/netns/cni-6bc12fba-a6f4-e5e8-f555-ab192ef60826" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.597 [INFO][4170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.597 [INFO][4170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.667 [INFO][4191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" HandleID="k8s-pod-network.728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.668 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.668 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.678 [WARNING][4191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" HandleID="k8s-pod-network.728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.678 [INFO][4191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" HandleID="k8s-pod-network.728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.681 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:12.687773 containerd[1473]: 2025-04-30 03:29:12.684 [INFO][4170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:12.690844 containerd[1473]: time="2025-04-30T03:29:12.688005428Z" level=info msg="TearDown network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\" successfully" Apr 30 03:29:12.690844 containerd[1473]: time="2025-04-30T03:29:12.688045809Z" level=info msg="StopPodSandbox for \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\" returns successfully" Apr 30 03:29:12.693087 containerd[1473]: time="2025-04-30T03:29:12.691573571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5579bb7b4d-2rxcz,Uid:06d3b462-d34a-4562-b3c2-6a83b60fac79,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:12.695876 systemd[1]: run-netns-cni\x2d6bc12fba\x2da6f4\x2de5e8\x2df555\x2dab192ef60826.mount: Deactivated successfully. 
Apr 30 03:29:12.909251 systemd-networkd[1374]: califc6963c1e46: Link UP Apr 30 03:29:12.911056 systemd-networkd[1374]: califc6963c1e46: Gained carrier Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.787 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0 calico-apiserver-5579bb7b4d- calico-apiserver 06d3b462-d34a-4562-b3c2-6a83b60fac79 892 0 2025-04-30 03:28:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5579bb7b4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-0-7c044d2e24 calico-apiserver-5579bb7b4d-2rxcz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc6963c1e46 [] []}} ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-2rxcz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.787 [INFO][4198] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-2rxcz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.830 [INFO][4209] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" HandleID="k8s-pod-network.53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.843 [INFO][4209] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" HandleID="k8s-pod-network.53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-0-7c044d2e24", "pod":"calico-apiserver-5579bb7b4d-2rxcz", "timestamp":"2025-04-30 03:29:12.830168588 +0000 UTC"}, Hostname:"ci-4081.3.3-0-7c044d2e24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.843 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.843 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.844 [INFO][4209] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-7c044d2e24' Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.847 [INFO][4209] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.855 [INFO][4209] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.865 [INFO][4209] ipam/ipam.go 489: Trying affinity for 192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.868 [INFO][4209] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.872 [INFO][4209] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.872 [INFO][4209] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.875 [INFO][4209] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1 Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.883 [INFO][4209] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.898 [INFO][4209] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.131/26] block=192.168.74.128/26 handle="k8s-pod-network.53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.898 [INFO][4209] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.131/26] handle="k8s-pod-network.53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.899 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
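Each of the assignments above is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock", which is what keeps two CNI ADDs racing on the same node from claiming the same address. A deliberately simplified sketch of that pattern (a toy allocator for illustration only, not Calico's implementation, which persists blocks in the datastore):

    import threading, ipaddress

    class ToyBlockAllocator:
        # Toy model: one affine block, one host-wide lock, addresses handed out
        # in order. It only illustrates why the lock lines bracket each ADD.
        def __init__(self, cidr):
            self._lock = threading.Lock()                  # stand-in for the host-wide IPAM lock
            self._free = list(ipaddress.ip_network(cidr).hosts())
            self._used = {}

        def assign(self, handle):
            with self._lock:                               # "Acquired host-wide IPAM lock"
                ip = self._free.pop(0)                     # first free address in the block
                self._used[handle] = ip
                return ip                                  # lock released on leaving the with-block

    alloc = ToyBlockAllocator("192.168.74.128/26")
    print(alloc.assign("k8s-pod-network.example-handle"))  # 192.168.74.129, like the first claim above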
Apr 30 03:29:12.935134 containerd[1473]: 2025-04-30 03:29:12.899 [INFO][4209] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.131/26] IPv6=[] ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" HandleID="k8s-pod-network.53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:12.936547 containerd[1473]: 2025-04-30 03:29:12.904 [INFO][4198] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-2rxcz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0", GenerateName:"calico-apiserver-5579bb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"06d3b462-d34a-4562-b3c2-6a83b60fac79", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5579bb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"", Pod:"calico-apiserver-5579bb7b4d-2rxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc6963c1e46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:12.936547 containerd[1473]: 2025-04-30 03:29:12.904 [INFO][4198] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.131/32] ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-2rxcz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:12.936547 containerd[1473]: 2025-04-30 03:29:12.904 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc6963c1e46 ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-2rxcz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:12.936547 containerd[1473]: 2025-04-30 03:29:12.912 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-2rxcz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:12.936547 containerd[1473]: 2025-04-30 03:29:12.913 [INFO][4198] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-2rxcz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0", GenerateName:"calico-apiserver-5579bb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"06d3b462-d34a-4562-b3c2-6a83b60fac79", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5579bb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1", Pod:"calico-apiserver-5579bb7b4d-2rxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc6963c1e46", MAC:"16:13:29:55:df:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:12.936547 containerd[1473]: 2025-04-30 03:29:12.930 [INFO][4198] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1" Namespace="calico-apiserver" Pod="calico-apiserver-5579bb7b4d-2rxcz" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:13.004014 containerd[1473]: time="2025-04-30T03:29:13.003498695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:13.004014 containerd[1473]: time="2025-04-30T03:29:13.003621660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:13.004014 containerd[1473]: time="2025-04-30T03:29:13.003687908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:13.004014 containerd[1473]: time="2025-04-30T03:29:13.003825573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:13.048854 systemd[1]: Started cri-containerd-53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1.scope - libcontainer container 53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1. 
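The "Started cri-containerd-<id>.scope - libcontainer container <id>" lines show that each container started here gets its own systemd scope unit named after the containerd ID, so an ID taken from these logs maps directly to a unit systemd is tracking. A trivial illustration (the ID is the sandbox started above; the systemctl hint is just a usage suggestion):

    cid = "53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1"
    unit = f"cri-containerd-{cid}.scope"   # the unit name systemd logs above
    print(unit)
    # inspect it with, e.g.:  systemctl status cri-containerd-<cid>.scope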
Apr 30 03:29:13.118581 containerd[1473]: time="2025-04-30T03:29:13.118437111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5579bb7b4d-2rxcz,Uid:06d3b462-d34a-4562-b3c2-6a83b60fac79,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1\"" Apr 30 03:29:13.167050 systemd[1]: Started sshd@10-143.198.63.212:22-139.178.89.65:56258.service - OpenSSH per-connection server daemon (139.178.89.65:56258). Apr 30 03:29:13.240718 sshd[4271]: Accepted publickey for core from 139.178.89.65 port 56258 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:13.243920 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:13.248089 sshd[4175]: Invalid user test2 from 92.255.57.132 port 45274 Apr 30 03:29:13.255649 systemd-logind[1450]: New session 9 of user core. Apr 30 03:29:13.259126 containerd[1473]: time="2025-04-30T03:29:13.258780720Z" level=info msg="StopPodSandbox for \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\"" Apr 30 03:29:13.259126 containerd[1473]: time="2025-04-30T03:29:13.258869198Z" level=info msg="StopPodSandbox for \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\"" Apr 30 03:29:13.260278 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:29:13.427419 sshd[4175]: Connection closed by invalid user test2 92.255.57.132 port 45274 [preauth] Apr 30 03:29:13.438740 systemd[1]: sshd@9-143.198.63.212:22-92.255.57.132:45274.service: Deactivated successfully. Apr 30 03:29:13.457424 systemd-networkd[1374]: cali2231e44613b: Gained IPv6LL Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.359 [INFO][4297] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.362 [INFO][4297] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" iface="eth0" netns="/var/run/netns/cni-03a731b2-cfc8-06de-15aa-196b2102497c" Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.364 [INFO][4297] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" iface="eth0" netns="/var/run/netns/cni-03a731b2-cfc8-06de-15aa-196b2102497c" Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.369 [INFO][4297] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" iface="eth0" netns="/var/run/netns/cni-03a731b2-cfc8-06de-15aa-196b2102497c" Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.371 [INFO][4297] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.371 [INFO][4297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.446 [INFO][4320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" HandleID="k8s-pod-network.0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.447 [INFO][4320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.447 [INFO][4320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.475 [WARNING][4320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" HandleID="k8s-pod-network.0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.475 [INFO][4320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" HandleID="k8s-pod-network.0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.480 [INFO][4320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:13.491955 containerd[1473]: 2025-04-30 03:29:13.488 [INFO][4297] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:13.496127 containerd[1473]: time="2025-04-30T03:29:13.492307814Z" level=info msg="TearDown network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\" successfully" Apr 30 03:29:13.496127 containerd[1473]: time="2025-04-30T03:29:13.492338767Z" level=info msg="StopPodSandbox for \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\" returns successfully" Apr 30 03:29:13.496127 containerd[1473]: time="2025-04-30T03:29:13.494878942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bfvvm,Uid:8f623e99-7bb9-4ed3-8866-963ff1311503,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:13.522781 systemd-networkd[1374]: cali7377dafce28: Gained IPv6LL Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.381 [INFO][4304] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.381 [INFO][4304] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" iface="eth0" netns="/var/run/netns/cni-d1808d61-c23f-42b8-3881-521aab5716af" Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.383 [INFO][4304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" iface="eth0" netns="/var/run/netns/cni-d1808d61-c23f-42b8-3881-521aab5716af" Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.385 [INFO][4304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" iface="eth0" netns="/var/run/netns/cni-d1808d61-c23f-42b8-3881-521aab5716af" Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.385 [INFO][4304] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.385 [INFO][4304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.479 [INFO][4325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" HandleID="k8s-pod-network.400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.481 [INFO][4325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.481 [INFO][4325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.497 [WARNING][4325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" HandleID="k8s-pod-network.400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.497 [INFO][4325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" HandleID="k8s-pod-network.400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.504 [INFO][4325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:13.527960 containerd[1473]: 2025-04-30 03:29:13.510 [INFO][4304] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:13.529329 containerd[1473]: time="2025-04-30T03:29:13.528989544Z" level=info msg="TearDown network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\" successfully" Apr 30 03:29:13.529329 containerd[1473]: time="2025-04-30T03:29:13.529040034Z" level=info msg="StopPodSandbox for \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\" returns successfully" Apr 30 03:29:13.530807 kubelet[2556]: E0430 03:29:13.529464 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:13.532454 containerd[1473]: time="2025-04-30T03:29:13.532056174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m94hd,Uid:bb22c691-4fbf-4372-b30c-281e4f70d3e0,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:13.606682 sshd[4271]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:13.610852 systemd[1]: sshd@10-143.198.63.212:22-139.178.89.65:56258.service: Deactivated successfully. Apr 30 03:29:13.617653 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:29:13.619188 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:29:13.629996 systemd[1]: run-netns-cni\x2d03a731b2\x2dcfc8\x2d06de\x2d15aa\x2d196b2102497c.mount: Deactivated successfully. Apr 30 03:29:13.631300 systemd[1]: run-netns-cni\x2dd1808d61\x2dc23f\x2d42b8\x2d3881\x2d521aab5716af.mount: Deactivated successfully. Apr 30 03:29:13.639400 systemd-logind[1450]: Removed session 9. Apr 30 03:29:13.800645 systemd-networkd[1374]: califf68f2818bb: Link UP Apr 30 03:29:13.802807 systemd-networkd[1374]: califf68f2818bb: Gained carrier Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.626 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0 coredns-7db6d8ff4d- kube-system bb22c691-4fbf-4372-b30c-281e4f70d3e0 906 0 2025-04-30 03:28:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-0-7c044d2e24 coredns-7db6d8ff4d-m94hd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf68f2818bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m94hd" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.629 [INFO][4348] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m94hd" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.674 [INFO][4369] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" HandleID="k8s-pod-network.aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 
03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.692 [INFO][4369] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" HandleID="k8s-pod-network.aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-0-7c044d2e24", "pod":"coredns-7db6d8ff4d-m94hd", "timestamp":"2025-04-30 03:29:13.674717567 +0000 UTC"}, Hostname:"ci-4081.3.3-0-7c044d2e24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.692 [INFO][4369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.693 [INFO][4369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.693 [INFO][4369] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-7c044d2e24' Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.699 [INFO][4369] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.715 [INFO][4369] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.727 [INFO][4369] ipam/ipam.go 489: Trying affinity for 192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.733 [INFO][4369] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.743 [INFO][4369] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.743 [INFO][4369] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.752 [INFO][4369] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366 Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.763 [INFO][4369] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.785 [INFO][4369] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.132/26] block=192.168.74.128/26 handle="k8s-pod-network.aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.785 [INFO][4369] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.132/26] handle="k8s-pod-network.aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" 
host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.785 [INFO][4369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:13.834741 containerd[1473]: 2025-04-30 03:29:13.785 [INFO][4369] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.132/26] IPv6=[] ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" HandleID="k8s-pod-network.aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:13.839383 containerd[1473]: 2025-04-30 03:29:13.790 [INFO][4348] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m94hd" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb22c691-4fbf-4372-b30c-281e4f70d3e0", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"", Pod:"coredns-7db6d8ff4d-m94hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf68f2818bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:13.839383 containerd[1473]: 2025-04-30 03:29:13.791 [INFO][4348] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.132/32] ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m94hd" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:13.839383 containerd[1473]: 2025-04-30 03:29:13.791 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf68f2818bb ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m94hd" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:13.839383 containerd[1473]: 2025-04-30 03:29:13.805 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m94hd" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:13.839383 containerd[1473]: 2025-04-30 03:29:13.808 [INFO][4348] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m94hd" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb22c691-4fbf-4372-b30c-281e4f70d3e0", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366", Pod:"coredns-7db6d8ff4d-m94hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf68f2818bb", MAC:"56:a6:cc:c2:6b:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:13.839383 containerd[1473]: 2025-04-30 03:29:13.828 [INFO][4348] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m94hd" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:13.884938 containerd[1473]: time="2025-04-30T03:29:13.884590520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:13.884938 containerd[1473]: time="2025-04-30T03:29:13.884687842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:13.884938 containerd[1473]: time="2025-04-30T03:29:13.884704916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:13.885322 containerd[1473]: time="2025-04-30T03:29:13.885252991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:13.902021 systemd-networkd[1374]: califab5f7313e2: Link UP Apr 30 03:29:13.904143 systemd-networkd[1374]: califab5f7313e2: Gained carrier Apr 30 03:29:13.940772 systemd[1]: Started cri-containerd-aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366.scope - libcontainer container aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366. Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.620 [INFO][4339] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0 csi-node-driver- calico-system 8f623e99-7bb9-4ed3-8866-963ff1311503 905 0 2025-04-30 03:28:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-0-7c044d2e24 csi-node-driver-bfvvm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califab5f7313e2 [] []}} ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Namespace="calico-system" Pod="csi-node-driver-bfvvm" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.622 [INFO][4339] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Namespace="calico-system" Pod="csi-node-driver-bfvvm" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.673 [INFO][4364] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" HandleID="k8s-pod-network.be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.703 [INFO][4364] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" HandleID="k8s-pod-network.be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b390), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-0-7c044d2e24", "pod":"csi-node-driver-bfvvm", "timestamp":"2025-04-30 03:29:13.673460863 +0000 UTC"}, Hostname:"ci-4081.3.3-0-7c044d2e24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.704 [INFO][4364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.785 [INFO][4364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.785 [INFO][4364] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-7c044d2e24' Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.795 [INFO][4364] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.822 [INFO][4364] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.838 [INFO][4364] ipam/ipam.go 489: Trying affinity for 192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.843 [INFO][4364] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.849 [INFO][4364] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.849 [INFO][4364] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.854 [INFO][4364] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.868 [INFO][4364] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.879 [INFO][4364] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.133/26] block=192.168.74.128/26 handle="k8s-pod-network.be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.879 [INFO][4364] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.133/26] handle="k8s-pod-network.be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.879 [INFO][4364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:13.944579 containerd[1473]: 2025-04-30 03:29:13.879 [INFO][4364] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.133/26] IPv6=[] ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" HandleID="k8s-pod-network.be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.945340 containerd[1473]: 2025-04-30 03:29:13.884 [INFO][4339] cni-plugin/k8s.go 386: Populated endpoint ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Namespace="calico-system" Pod="csi-node-driver-bfvvm" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f623e99-7bb9-4ed3-8866-963ff1311503", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"", Pod:"csi-node-driver-bfvvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califab5f7313e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:13.945340 containerd[1473]: 2025-04-30 03:29:13.887 [INFO][4339] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.133/32] ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Namespace="calico-system" Pod="csi-node-driver-bfvvm" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.945340 containerd[1473]: 2025-04-30 03:29:13.887 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califab5f7313e2 ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Namespace="calico-system" Pod="csi-node-driver-bfvvm" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.945340 containerd[1473]: 2025-04-30 03:29:13.905 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Namespace="calico-system" Pod="csi-node-driver-bfvvm" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.945340 containerd[1473]: 2025-04-30 03:29:13.906 [INFO][4339] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Namespace="calico-system" Pod="csi-node-driver-bfvvm" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f623e99-7bb9-4ed3-8866-963ff1311503", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec", Pod:"csi-node-driver-bfvvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califab5f7313e2", MAC:"72:89:c4:a3:88:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:13.945340 containerd[1473]: 2025-04-30 03:29:13.933 [INFO][4339] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec" Namespace="calico-system" Pod="csi-node-driver-bfvvm" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:13.994017 containerd[1473]: time="2025-04-30T03:29:13.991375899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:13.994017 containerd[1473]: time="2025-04-30T03:29:13.991434423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:13.994017 containerd[1473]: time="2025-04-30T03:29:13.991445551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:13.994017 containerd[1473]: time="2025-04-30T03:29:13.991548294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:14.027717 systemd[1]: Started cri-containerd-be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec.scope - libcontainer container be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec. 
Apr 30 03:29:14.039221 containerd[1473]: time="2025-04-30T03:29:14.039030359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m94hd,Uid:bb22c691-4fbf-4372-b30c-281e4f70d3e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366\"" Apr 30 03:29:14.040888 kubelet[2556]: E0430 03:29:14.040799 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:14.055436 containerd[1473]: time="2025-04-30T03:29:14.054534622Z" level=info msg="CreateContainer within sandbox \"aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:14.090345 containerd[1473]: time="2025-04-30T03:29:14.090264527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bfvvm,Uid:8f623e99-7bb9-4ed3-8866-963ff1311503,Namespace:calico-system,Attempt:1,} returns sandbox id \"be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec\"" Apr 30 03:29:14.091600 containerd[1473]: time="2025-04-30T03:29:14.091559698Z" level=info msg="CreateContainer within sandbox \"aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18596bce1fb61d62f053c96d9f0fe0f0fd7320d8ac176e152cef46a26ae81716\"" Apr 30 03:29:14.092773 containerd[1473]: time="2025-04-30T03:29:14.092720404Z" level=info msg="StartContainer for \"18596bce1fb61d62f053c96d9f0fe0f0fd7320d8ac176e152cef46a26ae81716\"" Apr 30 03:29:14.097747 systemd-networkd[1374]: califc6963c1e46: Gained IPv6LL Apr 30 03:29:14.145848 systemd[1]: Started cri-containerd-18596bce1fb61d62f053c96d9f0fe0f0fd7320d8ac176e152cef46a26ae81716.scope - libcontainer container 18596bce1fb61d62f053c96d9f0fe0f0fd7320d8ac176e152cef46a26ae81716. 
Apr 30 03:29:14.184278 containerd[1473]: time="2025-04-30T03:29:14.184106946Z" level=info msg="StartContainer for \"18596bce1fb61d62f053c96d9f0fe0f0fd7320d8ac176e152cef46a26ae81716\" returns successfully" Apr 30 03:29:14.626663 kubelet[2556]: E0430 03:29:14.625975 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:14.660438 kubelet[2556]: I0430 03:29:14.660356 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-m94hd" podStartSLOduration=36.660328751 podStartE2EDuration="36.660328751s" podCreationTimestamp="2025-04-30 03:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:14.659967908 +0000 UTC m=+50.584267646" watchObservedRunningTime="2025-04-30 03:29:14.660328751 +0000 UTC m=+50.584628488" Apr 30 03:29:14.929008 systemd-networkd[1374]: califf68f2818bb: Gained IPv6LL Apr 30 03:29:15.259222 containerd[1473]: time="2025-04-30T03:29:15.258400089Z" level=info msg="StopPodSandbox for \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\"" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.481 [INFO][4546] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.482 [INFO][4546] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" iface="eth0" netns="/var/run/netns/cni-b163d8b8-b39b-e45d-e9e1-4ecad162ef89" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.483 [INFO][4546] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" iface="eth0" netns="/var/run/netns/cni-b163d8b8-b39b-e45d-e9e1-4ecad162ef89" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.484 [INFO][4546] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" iface="eth0" netns="/var/run/netns/cni-b163d8b8-b39b-e45d-e9e1-4ecad162ef89" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.484 [INFO][4546] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.484 [INFO][4546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.565 [INFO][4553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" HandleID="k8s-pod-network.f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.565 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.565 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.588 [WARNING][4553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" HandleID="k8s-pod-network.f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.588 [INFO][4553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" HandleID="k8s-pod-network.f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.593 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:15.604548 containerd[1473]: 2025-04-30 03:29:15.598 [INFO][4546] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:15.607429 containerd[1473]: time="2025-04-30T03:29:15.606360983Z" level=info msg="TearDown network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\" successfully" Apr 30 03:29:15.607429 containerd[1473]: time="2025-04-30T03:29:15.606598247Z" level=info msg="StopPodSandbox for \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\" returns successfully" Apr 30 03:29:15.608533 kubelet[2556]: E0430 03:29:15.607877 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:15.609176 systemd[1]: run-netns-cni\x2db163d8b8\x2db39b\x2de45d\x2de9e1\x2d4ecad162ef89.mount: Deactivated successfully. 
Apr 30 03:29:15.611322 containerd[1473]: time="2025-04-30T03:29:15.610314020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xsv99,Uid:021ebc6e-397c-468a-9ff4-cdbf45e8c256,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:15.668553 kubelet[2556]: E0430 03:29:15.668231 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:15.889877 systemd-networkd[1374]: califab5f7313e2: Gained IPv6LL Apr 30 03:29:16.060828 systemd-networkd[1374]: calibd21a73678a: Link UP Apr 30 03:29:16.066119 systemd-networkd[1374]: calibd21a73678a: Gained carrier Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.760 [INFO][4561] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0 coredns-7db6d8ff4d- kube-system 021ebc6e-397c-468a-9ff4-cdbf45e8c256 940 0 2025-04-30 03:28:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-0-7c044d2e24 coredns-7db6d8ff4d-xsv99 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibd21a73678a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsv99" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.761 [INFO][4561] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsv99" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.909 [INFO][4575] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" HandleID="k8s-pod-network.6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.946 [INFO][4575] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" HandleID="k8s-pod-network.6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011bb20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-0-7c044d2e24", "pod":"coredns-7db6d8ff4d-xsv99", "timestamp":"2025-04-30 03:29:15.909068823 +0000 UTC"}, Hostname:"ci-4081.3.3-0-7c044d2e24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.946 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.947 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.947 [INFO][4575] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-0-7c044d2e24' Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.951 [INFO][4575] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.961 [INFO][4575] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.976 [INFO][4575] ipam/ipam.go 489: Trying affinity for 192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.982 [INFO][4575] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.989 [INFO][4575] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.989 [INFO][4575] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:15.997 [INFO][4575] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727 Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:16.011 [INFO][4575] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:16.030 [INFO][4575] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.134/26] block=192.168.74.128/26 handle="k8s-pod-network.6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:16.030 [INFO][4575] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.134/26] handle="k8s-pod-network.6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" host="ci-4081.3.3-0-7c044d2e24" Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:16.030 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:16.124770 containerd[1473]: 2025-04-30 03:29:16.031 [INFO][4575] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.134/26] IPv6=[] ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" HandleID="k8s-pod-network.6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:16.129007 containerd[1473]: 2025-04-30 03:29:16.042 [INFO][4561] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsv99" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"021ebc6e-397c-468a-9ff4-cdbf45e8c256", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"", Pod:"coredns-7db6d8ff4d-xsv99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd21a73678a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:16.129007 containerd[1473]: 2025-04-30 03:29:16.044 [INFO][4561] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.134/32] ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsv99" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:16.129007 containerd[1473]: 2025-04-30 03:29:16.045 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd21a73678a ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsv99" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:16.129007 containerd[1473]: 2025-04-30 03:29:16.073 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsv99" 
WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:16.129007 containerd[1473]: 2025-04-30 03:29:16.080 [INFO][4561] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsv99" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"021ebc6e-397c-468a-9ff4-cdbf45e8c256", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727", Pod:"coredns-7db6d8ff4d-xsv99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd21a73678a", MAC:"2a:9b:11:cd:ca:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:16.129007 containerd[1473]: 2025-04-30 03:29:16.116 [INFO][4561] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xsv99" WorkloadEndpoint="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:16.271144 containerd[1473]: time="2025-04-30T03:29:16.265957076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:16.271144 containerd[1473]: time="2025-04-30T03:29:16.266083156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:16.271144 containerd[1473]: time="2025-04-30T03:29:16.266149352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:16.281139 containerd[1473]: time="2025-04-30T03:29:16.273914212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:16.369659 systemd[1]: Started cri-containerd-6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727.scope - libcontainer container 6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727. Apr 30 03:29:16.407388 systemd[1]: run-containerd-runc-k8s.io-6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727-runc.Pf0vLg.mount: Deactivated successfully. Apr 30 03:29:16.533698 containerd[1473]: time="2025-04-30T03:29:16.533546048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xsv99,Uid:021ebc6e-397c-468a-9ff4-cdbf45e8c256,Namespace:kube-system,Attempt:1,} returns sandbox id \"6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727\"" Apr 30 03:29:16.536260 kubelet[2556]: E0430 03:29:16.536203 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:16.546378 containerd[1473]: time="2025-04-30T03:29:16.546310423Z" level=info msg="CreateContainer within sandbox \"6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:16.573011 containerd[1473]: time="2025-04-30T03:29:16.572811263Z" level=info msg="CreateContainer within sandbox \"6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea2fb215101a33a1713d5181401d2131b70bb68281e3f350d617bcd6389a132c\"" Apr 30 03:29:16.575615 containerd[1473]: time="2025-04-30T03:29:16.575500479Z" level=info msg="StartContainer for \"ea2fb215101a33a1713d5181401d2131b70bb68281e3f350d617bcd6389a132c\"" Apr 30 03:29:16.663128 systemd[1]: Started cri-containerd-ea2fb215101a33a1713d5181401d2131b70bb68281e3f350d617bcd6389a132c.scope - libcontainer container ea2fb215101a33a1713d5181401d2131b70bb68281e3f350d617bcd6389a132c. 
Apr 30 03:29:16.712306 kubelet[2556]: E0430 03:29:16.711872 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:16.780628 containerd[1473]: time="2025-04-30T03:29:16.779837271Z" level=info msg="StartContainer for \"ea2fb215101a33a1713d5181401d2131b70bb68281e3f350d617bcd6389a132c\" returns successfully" Apr 30 03:29:17.092933 containerd[1473]: time="2025-04-30T03:29:17.091896449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:17.093938 containerd[1473]: time="2025-04-30T03:29:17.093874108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:29:17.094710 containerd[1473]: time="2025-04-30T03:29:17.094669417Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:17.099795 containerd[1473]: time="2025-04-30T03:29:17.099706093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:17.101713 containerd[1473]: time="2025-04-30T03:29:17.101649606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.520322339s" Apr 30 03:29:17.101945 containerd[1473]: time="2025-04-30T03:29:17.101921558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:29:17.107931 containerd[1473]: time="2025-04-30T03:29:17.107857964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:29:17.142296 containerd[1473]: time="2025-04-30T03:29:17.142226769Z" level=info msg="CreateContainer within sandbox \"9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:29:17.168469 containerd[1473]: time="2025-04-30T03:29:17.167697777Z" level=info msg="CreateContainer within sandbox \"9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9e465124f75b13af8125cf9f43d8af01eb3537368d89f3787d234846777d08b9\"" Apr 30 03:29:17.169029 containerd[1473]: time="2025-04-30T03:29:17.168985933Z" level=info msg="StartContainer for \"9e465124f75b13af8125cf9f43d8af01eb3537368d89f3787d234846777d08b9\"" Apr 30 03:29:17.225814 systemd[1]: Started cri-containerd-9e465124f75b13af8125cf9f43d8af01eb3537368d89f3787d234846777d08b9.scope - libcontainer container 9e465124f75b13af8125cf9f43d8af01eb3537368d89f3787d234846777d08b9. 
Apr 30 03:29:17.297562 systemd-networkd[1374]: calibd21a73678a: Gained IPv6LL Apr 30 03:29:17.335204 containerd[1473]: time="2025-04-30T03:29:17.335111577Z" level=info msg="StartContainer for \"9e465124f75b13af8125cf9f43d8af01eb3537368d89f3787d234846777d08b9\" returns successfully" Apr 30 03:29:17.731012 kubelet[2556]: E0430 03:29:17.730857 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:17.753192 systemd[1]: run-containerd-runc-k8s.io-9e465124f75b13af8125cf9f43d8af01eb3537368d89f3787d234846777d08b9-runc.TJ6nfX.mount: Deactivated successfully. Apr 30 03:29:17.772416 kubelet[2556]: I0430 03:29:17.772304 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5846bc4884-ttjjz" podStartSLOduration=28.236361613 podStartE2EDuration="32.772275722s" podCreationTimestamp="2025-04-30 03:28:45 +0000 UTC" firstStartedPulling="2025-04-30 03:29:12.571478596 +0000 UTC m=+48.495778310" lastFinishedPulling="2025-04-30 03:29:17.107392676 +0000 UTC m=+53.031692419" observedRunningTime="2025-04-30 03:29:17.753976686 +0000 UTC m=+53.678276449" watchObservedRunningTime="2025-04-30 03:29:17.772275722 +0000 UTC m=+53.696575458" Apr 30 03:29:17.850137 kubelet[2556]: I0430 03:29:17.848492 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xsv99" podStartSLOduration=39.848463711 podStartE2EDuration="39.848463711s" podCreationTimestamp="2025-04-30 03:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:17.771133602 +0000 UTC m=+53.695433337" watchObservedRunningTime="2025-04-30 03:29:17.848463711 +0000 UTC m=+53.772763445" Apr 30 03:29:18.626695 systemd[1]: Started sshd@11-143.198.63.212:22-139.178.89.65:40542.service - OpenSSH per-connection server daemon (139.178.89.65:40542). Apr 30 03:29:18.719627 sshd[4740]: Accepted publickey for core from 139.178.89.65 port 40542 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:18.723079 sshd[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:18.733554 kubelet[2556]: E0430 03:29:18.733457 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:18.735289 systemd-logind[1450]: New session 10 of user core. Apr 30 03:29:18.740136 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:29:19.062130 sshd[4740]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:19.066981 systemd[1]: sshd@11-143.198.63.212:22-139.178.89.65:40542.service: Deactivated successfully. Apr 30 03:29:19.069938 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:29:19.071066 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:29:19.072638 systemd-logind[1450]: Removed session 10. 
Apr 30 03:29:19.644157 kubelet[2556]: E0430 03:29:19.643535 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:19.740032 kubelet[2556]: E0430 03:29:19.739993 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:21.005630 containerd[1473]: time="2025-04-30T03:29:21.005065192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:21.015356 containerd[1473]: time="2025-04-30T03:29:21.006205623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:29:21.015356 containerd[1473]: time="2025-04-30T03:29:21.014965295Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:21.017042 containerd[1473]: time="2025-04-30T03:29:21.016988162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:21.018215 containerd[1473]: time="2025-04-30T03:29:21.017746449Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.909836582s" Apr 30 03:29:21.018215 containerd[1473]: time="2025-04-30T03:29:21.017781603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:29:21.021191 containerd[1473]: time="2025-04-30T03:29:21.020898797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:29:21.024763 containerd[1473]: time="2025-04-30T03:29:21.023986196Z" level=info msg="CreateContainer within sandbox \"415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:29:21.040978 containerd[1473]: time="2025-04-30T03:29:21.040590637Z" level=info msg="CreateContainer within sandbox \"415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"63c9cd098ab5f2992eaa6e9d964d6b24c61dff4c54a31baec0558de55d257900\"" Apr 30 03:29:21.044473 containerd[1473]: time="2025-04-30T03:29:21.044401333Z" level=info msg="StartContainer for \"63c9cd098ab5f2992eaa6e9d964d6b24c61dff4c54a31baec0558de55d257900\"" Apr 30 03:29:21.048046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814673029.mount: Deactivated successfully. Apr 30 03:29:21.086828 systemd[1]: Started cri-containerd-63c9cd098ab5f2992eaa6e9d964d6b24c61dff4c54a31baec0558de55d257900.scope - libcontainer container 63c9cd098ab5f2992eaa6e9d964d6b24c61dff4c54a31baec0558de55d257900. 
Apr 30 03:29:21.146216 containerd[1473]: time="2025-04-30T03:29:21.146041021Z" level=info msg="StartContainer for \"63c9cd098ab5f2992eaa6e9d964d6b24c61dff4c54a31baec0558de55d257900\" returns successfully" Apr 30 03:29:21.740551 containerd[1473]: time="2025-04-30T03:29:21.740353678Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:21.742662 containerd[1473]: time="2025-04-30T03:29:21.741639817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:29:21.753319 containerd[1473]: time="2025-04-30T03:29:21.753266538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 732.320948ms" Apr 30 03:29:21.753319 containerd[1473]: time="2025-04-30T03:29:21.753311422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:29:21.756053 containerd[1473]: time="2025-04-30T03:29:21.755383301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:29:21.758749 containerd[1473]: time="2025-04-30T03:29:21.758271921Z" level=info msg="CreateContainer within sandbox \"53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:29:21.784008 containerd[1473]: time="2025-04-30T03:29:21.783959176Z" level=info msg="CreateContainer within sandbox \"53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"520ddd07b71aed2602a0bcfd57233e8d41d088b8af13f31c9aba21fc43c4d859\"" Apr 30 03:29:21.785545 containerd[1473]: time="2025-04-30T03:29:21.785174241Z" level=info msg="StartContainer for \"520ddd07b71aed2602a0bcfd57233e8d41d088b8af13f31c9aba21fc43c4d859\"" Apr 30 03:29:21.845807 systemd[1]: Started cri-containerd-520ddd07b71aed2602a0bcfd57233e8d41d088b8af13f31c9aba21fc43c4d859.scope - libcontainer container 520ddd07b71aed2602a0bcfd57233e8d41d088b8af13f31c9aba21fc43c4d859. 
Apr 30 03:29:21.926151 containerd[1473]: time="2025-04-30T03:29:21.925522281Z" level=info msg="StartContainer for \"520ddd07b71aed2602a0bcfd57233e8d41d088b8af13f31c9aba21fc43c4d859\" returns successfully" Apr 30 03:29:22.758463 kubelet[2556]: I0430 03:29:22.758252 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:22.789561 kubelet[2556]: I0430 03:29:22.789031 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5579bb7b4d-fp4xj" podStartSLOduration=29.356789631 podStartE2EDuration="37.789001701s" podCreationTimestamp="2025-04-30 03:28:45 +0000 UTC" firstStartedPulling="2025-04-30 03:29:12.587216741 +0000 UTC m=+48.511516460" lastFinishedPulling="2025-04-30 03:29:21.019428806 +0000 UTC m=+56.943728530" observedRunningTime="2025-04-30 03:29:21.779857838 +0000 UTC m=+57.704157571" watchObservedRunningTime="2025-04-30 03:29:22.789001701 +0000 UTC m=+58.713301437" Apr 30 03:29:23.761074 kubelet[2556]: I0430 03:29:23.760310 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:24.082199 systemd[1]: Started sshd@12-143.198.63.212:22-139.178.89.65:40544.service - OpenSSH per-connection server daemon (139.178.89.65:40544). Apr 30 03:29:24.222131 sshd[4874]: Accepted publickey for core from 139.178.89.65 port 40544 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:24.231302 sshd[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:24.254010 systemd-logind[1450]: New session 11 of user core. Apr 30 03:29:24.258579 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:29:24.293138 containerd[1473]: time="2025-04-30T03:29:24.293087507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:24.295198 containerd[1473]: time="2025-04-30T03:29:24.294744825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:29:24.295198 containerd[1473]: time="2025-04-30T03:29:24.294941447Z" level=info msg="StopPodSandbox for \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\"" Apr 30 03:29:24.297544 containerd[1473]: time="2025-04-30T03:29:24.296678522Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:24.301733 containerd[1473]: time="2025-04-30T03:29:24.301666640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:24.303789 containerd[1473]: time="2025-04-30T03:29:24.303738617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.548307457s" Apr 30 03:29:24.304020 containerd[1473]: time="2025-04-30T03:29:24.303985270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:29:24.310612 containerd[1473]: 
time="2025-04-30T03:29:24.310063836Z" level=info msg="CreateContainer within sandbox \"be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:29:24.351642 containerd[1473]: time="2025-04-30T03:29:24.346982554Z" level=info msg="CreateContainer within sandbox \"be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b568e5207cab14db6a03cc1799004d34651d5724f15e6b7727e8dd4c54e6f58\"" Apr 30 03:29:24.351642 containerd[1473]: time="2025-04-30T03:29:24.348479980Z" level=info msg="StartContainer for \"9b568e5207cab14db6a03cc1799004d34651d5724f15e6b7727e8dd4c54e6f58\"" Apr 30 03:29:24.347712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166019952.mount: Deactivated successfully. Apr 30 03:29:24.495836 systemd[1]: Started cri-containerd-9b568e5207cab14db6a03cc1799004d34651d5724f15e6b7727e8dd4c54e6f58.scope - libcontainer container 9b568e5207cab14db6a03cc1799004d34651d5724f15e6b7727e8dd4c54e6f58. Apr 30 03:29:24.591158 containerd[1473]: time="2025-04-30T03:29:24.590771311Z" level=info msg="StartContainer for \"9b568e5207cab14db6a03cc1799004d34651d5724f15e6b7727e8dd4c54e6f58\" returns successfully" Apr 30 03:29:24.599914 containerd[1473]: time="2025-04-30T03:29:24.599750290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.643 [WARNING][4896] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0", GenerateName:"calico-apiserver-5579bb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce3de429-7f35-47dd-ba9a-d97e4159a358", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5579bb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805", Pod:"calico-apiserver-5579bb7b4d-fp4xj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2231e44613b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.644 [INFO][4896] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.644 [INFO][4896] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" iface="eth0" netns="" Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.644 [INFO][4896] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.644 [INFO][4896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.695 [INFO][4941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" HandleID="k8s-pod-network.1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.695 [INFO][4941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.695 [INFO][4941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.707 [WARNING][4941] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" HandleID="k8s-pod-network.1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.707 [INFO][4941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" HandleID="k8s-pod-network.1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.710 [INFO][4941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:24.717876 containerd[1473]: 2025-04-30 03:29:24.714 [INFO][4896] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:24.719264 containerd[1473]: time="2025-04-30T03:29:24.717837389Z" level=info msg="TearDown network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\" successfully" Apr 30 03:29:24.719264 containerd[1473]: time="2025-04-30T03:29:24.718188025Z" level=info msg="StopPodSandbox for \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\" returns successfully" Apr 30 03:29:24.825654 containerd[1473]: time="2025-04-30T03:29:24.825573110Z" level=info msg="RemovePodSandbox for \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\"" Apr 30 03:29:24.830673 containerd[1473]: time="2025-04-30T03:29:24.830598116Z" level=info msg="Forcibly stopping sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\"" Apr 30 03:29:25.039034 sshd[4874]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:24.951 [WARNING][4964] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0", GenerateName:"calico-apiserver-5579bb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce3de429-7f35-47dd-ba9a-d97e4159a358", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5579bb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"415b229d4fab050610a6aac4537282762a155d2921ebdaaa3991bf094f9a1805", Pod:"calico-apiserver-5579bb7b4d-fp4xj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2231e44613b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:24.954 [INFO][4964] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:24.954 [INFO][4964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" iface="eth0" netns="" Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:24.954 [INFO][4964] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:24.954 [INFO][4964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:25.010 [INFO][4971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" HandleID="k8s-pod-network.1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:25.010 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:25.010 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:25.025 [WARNING][4971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" HandleID="k8s-pod-network.1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:25.025 [INFO][4971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" HandleID="k8s-pod-network.1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--fp4xj-eth0" Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:25.035 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:25.045976 containerd[1473]: 2025-04-30 03:29:25.040 [INFO][4964] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e" Apr 30 03:29:25.046733 containerd[1473]: time="2025-04-30T03:29:25.045965296Z" level=info msg="TearDown network for sandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\" successfully" Apr 30 03:29:25.053725 systemd[1]: sshd@12-143.198.63.212:22-139.178.89.65:40544.service: Deactivated successfully. Apr 30 03:29:25.060911 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:29:25.064162 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:29:25.076159 systemd[1]: Started sshd@13-143.198.63.212:22-139.178.89.65:40546.service - OpenSSH per-connection server daemon (139.178.89.65:40546). Apr 30 03:29:25.079431 containerd[1473]: time="2025-04-30T03:29:25.079365566Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:25.079885 containerd[1473]: time="2025-04-30T03:29:25.079704947Z" level=info msg="RemovePodSandbox \"1a9bb391f15dbe10be08afa184d6a68671c908327c412ff5db8a6502a648904e\" returns successfully" Apr 30 03:29:25.081288 containerd[1473]: time="2025-04-30T03:29:25.080700674Z" level=info msg="StopPodSandbox for \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\"" Apr 30 03:29:25.081074 systemd-logind[1450]: Removed session 11. Apr 30 03:29:25.154887 sshd[4981]: Accepted publickey for core from 139.178.89.65 port 40546 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:25.155976 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:25.163138 systemd-logind[1450]: New session 12 of user core. Apr 30 03:29:25.169992 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.165 [WARNING][4994] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f623e99-7bb9-4ed3-8866-963ff1311503", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec", Pod:"csi-node-driver-bfvvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califab5f7313e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.166 [INFO][4994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.166 [INFO][4994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" iface="eth0" netns="" Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.166 [INFO][4994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.166 [INFO][4994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.214 [INFO][5002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" HandleID="k8s-pod-network.0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.215 [INFO][5002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.215 [INFO][5002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.224 [WARNING][5002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" HandleID="k8s-pod-network.0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.224 [INFO][5002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" HandleID="k8s-pod-network.0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.227 [INFO][5002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:25.235783 containerd[1473]: 2025-04-30 03:29:25.230 [INFO][4994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:25.237721 containerd[1473]: time="2025-04-30T03:29:25.235848415Z" level=info msg="TearDown network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\" successfully" Apr 30 03:29:25.237721 containerd[1473]: time="2025-04-30T03:29:25.235883248Z" level=info msg="StopPodSandbox for \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\" returns successfully" Apr 30 03:29:25.237721 containerd[1473]: time="2025-04-30T03:29:25.237201159Z" level=info msg="RemovePodSandbox for \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\"" Apr 30 03:29:25.237721 containerd[1473]: time="2025-04-30T03:29:25.237724088Z" level=info msg="Forcibly stopping sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\"" Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.342 [WARNING][5025] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f623e99-7bb9-4ed3-8866-963ff1311503", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec", Pod:"csi-node-driver-bfvvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califab5f7313e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.343 [INFO][5025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.343 [INFO][5025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" iface="eth0" netns="" Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.344 [INFO][5025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.344 [INFO][5025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.423 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" HandleID="k8s-pod-network.0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.423 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.423 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.433 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" HandleID="k8s-pod-network.0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.434 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" HandleID="k8s-pod-network.0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Workload="ci--4081.3.3--0--7c044d2e24-k8s-csi--node--driver--bfvvm-eth0" Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.437 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:25.443353 containerd[1473]: 2025-04-30 03:29:25.440 [INFO][5025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd" Apr 30 03:29:25.443353 containerd[1473]: time="2025-04-30T03:29:25.443323447Z" level=info msg="TearDown network for sandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\" successfully" Apr 30 03:29:25.450036 containerd[1473]: time="2025-04-30T03:29:25.449918043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:25.450222 containerd[1473]: time="2025-04-30T03:29:25.450102662Z" level=info msg="RemovePodSandbox \"0a660ab86b6848b2bc929122bdebe21e9f69b0360781e4fd60a4a4d886e6e1bd\" returns successfully" Apr 30 03:29:25.453035 containerd[1473]: time="2025-04-30T03:29:25.452407344Z" level=info msg="StopPodSandbox for \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\"" Apr 30 03:29:25.499237 sshd[4981]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:25.515943 systemd[1]: sshd@13-143.198.63.212:22-139.178.89.65:40546.service: Deactivated successfully. Apr 30 03:29:25.522834 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:29:25.534788 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:29:25.540105 systemd[1]: Started sshd@14-143.198.63.212:22-139.178.89.65:40554.service - OpenSSH per-connection server daemon (139.178.89.65:40554). Apr 30 03:29:25.547427 systemd-logind[1450]: Removed session 12. Apr 30 03:29:25.649743 sshd[5061]: Accepted publickey for core from 139.178.89.65 port 40554 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:25.655102 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:25.669624 systemd-logind[1450]: New session 13 of user core. Apr 30 03:29:25.675003 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.646 [WARNING][5052] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb22c691-4fbf-4372-b30c-281e4f70d3e0", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366", Pod:"coredns-7db6d8ff4d-m94hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf68f2818bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.646 [INFO][5052] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.646 [INFO][5052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" iface="eth0" netns="" Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.646 [INFO][5052] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.646 [INFO][5052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.709 [INFO][5065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" HandleID="k8s-pod-network.400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.710 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.710 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.719 [WARNING][5065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" HandleID="k8s-pod-network.400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.719 [INFO][5065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" HandleID="k8s-pod-network.400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.724 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:25.734023 containerd[1473]: 2025-04-30 03:29:25.728 [INFO][5052] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:25.734023 containerd[1473]: time="2025-04-30T03:29:25.733972673Z" level=info msg="TearDown network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\" successfully" Apr 30 03:29:25.734023 containerd[1473]: time="2025-04-30T03:29:25.734021301Z" level=info msg="StopPodSandbox for \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\" returns successfully" Apr 30 03:29:25.739452 containerd[1473]: time="2025-04-30T03:29:25.736900758Z" level=info msg="RemovePodSandbox for \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\"" Apr 30 03:29:25.739452 containerd[1473]: time="2025-04-30T03:29:25.736956318Z" level=info msg="Forcibly stopping sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\"" Apr 30 03:29:25.916864 sshd[5061]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:25.927120 systemd[1]: sshd@14-143.198.63.212:22-139.178.89.65:40554.service: Deactivated successfully. Apr 30 03:29:25.936753 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:29:25.939252 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:29:25.942474 systemd-logind[1450]: Removed session 13. Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.866 [WARNING][5090] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb22c691-4fbf-4372-b30c-281e4f70d3e0", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"aae761b65af1085c54bf75fb0f2dc91e9a953022829d295fe9792802243ef366", Pod:"coredns-7db6d8ff4d-m94hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf68f2818bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.866 [INFO][5090] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.866 [INFO][5090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" iface="eth0" netns="" Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.866 [INFO][5090] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.866 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.916 [INFO][5098] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" HandleID="k8s-pod-network.400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.917 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.917 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.942 [WARNING][5098] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" HandleID="k8s-pod-network.400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.943 [INFO][5098] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" HandleID="k8s-pod-network.400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--m94hd-eth0" Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.948 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:25.954731 containerd[1473]: 2025-04-30 03:29:25.950 [INFO][5090] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d" Apr 30 03:29:25.954731 containerd[1473]: time="2025-04-30T03:29:25.953800700Z" level=info msg="TearDown network for sandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\" successfully" Apr 30 03:29:25.957726 containerd[1473]: time="2025-04-30T03:29:25.957648226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:25.958286 containerd[1473]: time="2025-04-30T03:29:25.957747685Z" level=info msg="RemovePodSandbox \"400e217bc624b7e9549c513e8df21bae4d83af0440d31402c71cdfd807fc423d\" returns successfully" Apr 30 03:29:25.959027 containerd[1473]: time="2025-04-30T03:29:25.958666728Z" level=info msg="StopPodSandbox for \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\"" Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.034 [WARNING][5118] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0", GenerateName:"calico-kube-controllers-5846bc4884-", Namespace:"calico-system", SelfLink:"", UID:"7b1efadb-18f7-436b-8e71-a7c0c7270888", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5846bc4884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277", Pod:"calico-kube-controllers-5846bc4884-ttjjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7377dafce28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.035 [INFO][5118] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.035 [INFO][5118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" iface="eth0" netns="" Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.035 [INFO][5118] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.035 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.071 [INFO][5125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" HandleID="k8s-pod-network.b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.071 [INFO][5125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.072 [INFO][5125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.084 [WARNING][5125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" HandleID="k8s-pod-network.b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.084 [INFO][5125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" HandleID="k8s-pod-network.b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.088 [INFO][5125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:26.094103 containerd[1473]: 2025-04-30 03:29:26.091 [INFO][5118] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:26.095745 containerd[1473]: time="2025-04-30T03:29:26.094147218Z" level=info msg="TearDown network for sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\" successfully" Apr 30 03:29:26.095745 containerd[1473]: time="2025-04-30T03:29:26.094185496Z" level=info msg="StopPodSandbox for \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\" returns successfully" Apr 30 03:29:26.096492 containerd[1473]: time="2025-04-30T03:29:26.096403757Z" level=info msg="RemovePodSandbox for \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\"" Apr 30 03:29:26.096492 containerd[1473]: time="2025-04-30T03:29:26.096466975Z" level=info msg="Forcibly stopping sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\"" Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.159 [WARNING][5143] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0", GenerateName:"calico-kube-controllers-5846bc4884-", Namespace:"calico-system", SelfLink:"", UID:"7b1efadb-18f7-436b-8e71-a7c0c7270888", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5846bc4884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"9aed92414a88e7842daf679e751537f30eaad5afe3004bbf26a9534f8b72a277", Pod:"calico-kube-controllers-5846bc4884-ttjjz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7377dafce28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.160 [INFO][5143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.160 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" iface="eth0" netns="" Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.160 [INFO][5143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.160 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.200 [INFO][5150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" HandleID="k8s-pod-network.b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.200 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.200 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.210 [WARNING][5150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" HandleID="k8s-pod-network.b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.210 [INFO][5150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" HandleID="k8s-pod-network.b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--kube--controllers--5846bc4884--ttjjz-eth0" Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.213 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:26.220179 containerd[1473]: 2025-04-30 03:29:26.216 [INFO][5143] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9" Apr 30 03:29:26.221054 containerd[1473]: time="2025-04-30T03:29:26.220280843Z" level=info msg="TearDown network for sandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\" successfully" Apr 30 03:29:26.224726 containerd[1473]: time="2025-04-30T03:29:26.224644901Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:26.224883 containerd[1473]: time="2025-04-30T03:29:26.224756417Z" level=info msg="RemovePodSandbox \"b44ff0bfea8194182e29f7577048f1e36064bbec16164538164531b74be129d9\" returns successfully" Apr 30 03:29:26.226160 containerd[1473]: time="2025-04-30T03:29:26.225753306Z" level=info msg="StopPodSandbox for \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\"" Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.300 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"021ebc6e-397c-468a-9ff4-cdbf45e8c256", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727", Pod:"coredns-7db6d8ff4d-xsv99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd21a73678a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.300 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.300 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" iface="eth0" netns="" Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.300 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.301 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.345 [INFO][5176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" HandleID="k8s-pod-network.f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.345 [INFO][5176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.345 [INFO][5176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.354 [WARNING][5176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" HandleID="k8s-pod-network.f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.354 [INFO][5176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" HandleID="k8s-pod-network.f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.357 [INFO][5176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:26.363107 containerd[1473]: 2025-04-30 03:29:26.359 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:26.364663 containerd[1473]: time="2025-04-30T03:29:26.364425036Z" level=info msg="TearDown network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\" successfully" Apr 30 03:29:26.364663 containerd[1473]: time="2025-04-30T03:29:26.364521144Z" level=info msg="StopPodSandbox for \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\" returns successfully" Apr 30 03:29:26.366225 containerd[1473]: time="2025-04-30T03:29:26.366163185Z" level=info msg="RemovePodSandbox for \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\"" Apr 30 03:29:26.366318 containerd[1473]: time="2025-04-30T03:29:26.366251677Z" level=info msg="Forcibly stopping sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\"" Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.439 [WARNING][5195] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"021ebc6e-397c-468a-9ff4-cdbf45e8c256", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"6929e6757925339501a83cfd69db5530e4d05dbc20fe06e1c494d89eb459d727", Pod:"coredns-7db6d8ff4d-xsv99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd21a73678a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.439 [INFO][5195] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.439 [INFO][5195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" iface="eth0" netns="" Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.439 [INFO][5195] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.439 [INFO][5195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.475 [INFO][5202] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" HandleID="k8s-pod-network.f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.475 [INFO][5202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.475 [INFO][5202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.487 [WARNING][5202] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" HandleID="k8s-pod-network.f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.487 [INFO][5202] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" HandleID="k8s-pod-network.f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Workload="ci--4081.3.3--0--7c044d2e24-k8s-coredns--7db6d8ff4d--xsv99-eth0" Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.490 [INFO][5202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:26.497912 containerd[1473]: 2025-04-30 03:29:26.492 [INFO][5195] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305" Apr 30 03:29:26.497912 containerd[1473]: time="2025-04-30T03:29:26.497919311Z" level=info msg="TearDown network for sandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\" successfully" Apr 30 03:29:26.506735 containerd[1473]: time="2025-04-30T03:29:26.506541402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:26.506938 containerd[1473]: time="2025-04-30T03:29:26.506839670Z" level=info msg="RemovePodSandbox \"f23b36c6458d5edbc07288946c400c22294b5c895af70fde7e6661ac78be8305\" returns successfully" Apr 30 03:29:26.507965 containerd[1473]: time="2025-04-30T03:29:26.507541793Z" level=info msg="StopPodSandbox for \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\"" Apr 30 03:29:26.605229 systemd[1]: Started sshd@15-143.198.63.212:22-218.92.0.157:55729.service - OpenSSH per-connection server daemon (218.92.0.157:55729). Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.562 [WARNING][5220] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0", GenerateName:"calico-apiserver-5579bb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"06d3b462-d34a-4562-b3c2-6a83b60fac79", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5579bb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1", Pod:"calico-apiserver-5579bb7b4d-2rxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc6963c1e46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.563 [INFO][5220] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.563 [INFO][5220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" iface="eth0" netns="" Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.563 [INFO][5220] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.563 [INFO][5220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.604 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" HandleID="k8s-pod-network.728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.605 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.605 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.616 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" HandleID="k8s-pod-network.728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.617 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" HandleID="k8s-pod-network.728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.619 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:26.626897 containerd[1473]: 2025-04-30 03:29:26.623 [INFO][5220] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:26.630096 containerd[1473]: time="2025-04-30T03:29:26.626942636Z" level=info msg="TearDown network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\" successfully" Apr 30 03:29:26.630096 containerd[1473]: time="2025-04-30T03:29:26.626978003Z" level=info msg="StopPodSandbox for \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\" returns successfully" Apr 30 03:29:26.630096 containerd[1473]: time="2025-04-30T03:29:26.627932601Z" level=info msg="RemovePodSandbox for \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\"" Apr 30 03:29:26.630096 containerd[1473]: time="2025-04-30T03:29:26.627976136Z" level=info msg="Forcibly stopping sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\"" Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.697 [WARNING][5248] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0", GenerateName:"calico-apiserver-5579bb7b4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"06d3b462-d34a-4562-b3c2-6a83b60fac79", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5579bb7b4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-0-7c044d2e24", ContainerID:"53fa3f7fcef90f04d163f2e6dcdfb72dda51bb246a41a743b0595ff9479865e1", Pod:"calico-apiserver-5579bb7b4d-2rxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc6963c1e46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.697 [INFO][5248] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.698 [INFO][5248] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" iface="eth0" netns="" Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.698 [INFO][5248] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.698 [INFO][5248] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.743 [INFO][5255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" HandleID="k8s-pod-network.728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.743 [INFO][5255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.743 [INFO][5255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.753 [WARNING][5255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" HandleID="k8s-pod-network.728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.753 [INFO][5255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" HandleID="k8s-pod-network.728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Workload="ci--4081.3.3--0--7c044d2e24-k8s-calico--apiserver--5579bb7b4d--2rxcz-eth0" Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.757 [INFO][5255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:26.762900 containerd[1473]: 2025-04-30 03:29:26.759 [INFO][5248] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e" Apr 30 03:29:26.763764 containerd[1473]: time="2025-04-30T03:29:26.763075385Z" level=info msg="TearDown network for sandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\" successfully" Apr 30 03:29:26.793626 containerd[1473]: time="2025-04-30T03:29:26.793394159Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:26.793626 containerd[1473]: time="2025-04-30T03:29:26.793568967Z" level=info msg="RemovePodSandbox \"728125c2774de7202ecf829857302c8ba9a5b6a956d3fd68f979a2d5f2c2b39e\" returns successfully" Apr 30 03:29:27.309599 containerd[1473]: time="2025-04-30T03:29:27.309541066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:27.311843 containerd[1473]: time="2025-04-30T03:29:27.311756506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:29:27.312961 containerd[1473]: time="2025-04-30T03:29:27.312872179Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:27.316721 containerd[1473]: time="2025-04-30T03:29:27.316628985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:27.318213 containerd[1473]: time="2025-04-30T03:29:27.318037815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.717681035s" Apr 30 03:29:27.318213 containerd[1473]: time="2025-04-30T03:29:27.318091706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:29:27.325120 containerd[1473]: 
time="2025-04-30T03:29:27.325019273Z" level=info msg="CreateContainer within sandbox \"be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:29:27.348638 containerd[1473]: time="2025-04-30T03:29:27.347637312Z" level=info msg="CreateContainer within sandbox \"be0dd55239f7b13536961312c4737a0dd1ac004c7640e905ee81eb07885b5eec\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"49eb523cef98d05baca50e125213294082fa1cc2006bb278377612593cfe4025\"" Apr 30 03:29:27.352168 containerd[1473]: time="2025-04-30T03:29:27.352127340Z" level=info msg="StartContainer for \"49eb523cef98d05baca50e125213294082fa1cc2006bb278377612593cfe4025\"" Apr 30 03:29:27.428870 systemd[1]: Started cri-containerd-49eb523cef98d05baca50e125213294082fa1cc2006bb278377612593cfe4025.scope - libcontainer container 49eb523cef98d05baca50e125213294082fa1cc2006bb278377612593cfe4025. Apr 30 03:29:27.484983 containerd[1473]: time="2025-04-30T03:29:27.482969951Z" level=info msg="StartContainer for \"49eb523cef98d05baca50e125213294082fa1cc2006bb278377612593cfe4025\" returns successfully" Apr 30 03:29:27.643196 sshd[5297]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Apr 30 03:29:27.841349 kubelet[2556]: I0430 03:29:27.839558 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5579bb7b4d-2rxcz" podStartSLOduration=34.205905593 podStartE2EDuration="42.839533075s" podCreationTimestamp="2025-04-30 03:28:45 +0000 UTC" firstStartedPulling="2025-04-30 03:29:13.12089024 +0000 UTC m=+49.045189959" lastFinishedPulling="2025-04-30 03:29:21.754517714 +0000 UTC m=+57.678817441" observedRunningTime="2025-04-30 03:29:22.790293142 +0000 UTC m=+58.714592875" watchObservedRunningTime="2025-04-30 03:29:27.839533075 +0000 UTC m=+63.763832804" Apr 30 03:29:28.523617 kubelet[2556]: I0430 03:29:28.523538 2556 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:29:28.523617 kubelet[2556]: I0430 03:29:28.523647 2556 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:29:29.297998 sshd[5233]: PAM: Permission denied for root from 218.92.0.157 Apr 30 03:29:29.573652 sshd[5323]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Apr 30 03:29:30.936271 systemd[1]: Started sshd@16-143.198.63.212:22-139.178.89.65:33122.service - OpenSSH per-connection server daemon (139.178.89.65:33122). Apr 30 03:29:31.027571 sshd[5331]: Accepted publickey for core from 139.178.89.65 port 33122 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:31.030706 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:31.036702 systemd-logind[1450]: New session 14 of user core. Apr 30 03:29:31.046848 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:29:31.249129 sshd[5331]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:31.254776 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:29:31.255587 systemd[1]: sshd@16-143.198.63.212:22-139.178.89.65:33122.service: Deactivated successfully. 
Apr 30 03:29:31.258433 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:29:31.260080 systemd-logind[1450]: Removed session 14. Apr 30 03:29:31.499840 sshd[5233]: PAM: Permission denied for root from 218.92.0.157 Apr 30 03:29:31.774176 sshd[5343]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Apr 30 03:29:33.974708 sshd[5233]: PAM: Permission denied for root from 218.92.0.157 Apr 30 03:29:34.111659 sshd[5233]: Received disconnect from 218.92.0.157 port 55729:11: [preauth] Apr 30 03:29:34.111659 sshd[5233]: Disconnected from authenticating user root 218.92.0.157 port 55729 [preauth] Apr 30 03:29:34.114939 systemd[1]: sshd@15-143.198.63.212:22-218.92.0.157:55729.service: Deactivated successfully. Apr 30 03:29:36.270976 systemd[1]: Started sshd@17-143.198.63.212:22-139.178.89.65:33134.service - OpenSSH per-connection server daemon (139.178.89.65:33134). Apr 30 03:29:36.317162 sshd[5349]: Accepted publickey for core from 139.178.89.65 port 33134 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:36.317969 sshd[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:36.324976 systemd-logind[1450]: New session 15 of user core. Apr 30 03:29:36.330987 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:29:36.365359 kubelet[2556]: I0430 03:29:36.365010 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:36.402541 kubelet[2556]: I0430 03:29:36.401874 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bfvvm" podStartSLOduration=38.174858012 podStartE2EDuration="51.401857684s" podCreationTimestamp="2025-04-30 03:28:45 +0000 UTC" firstStartedPulling="2025-04-30 03:29:14.093648025 +0000 UTC m=+50.017947765" lastFinishedPulling="2025-04-30 03:29:27.32064771 +0000 UTC m=+63.244947437" observedRunningTime="2025-04-30 03:29:27.843025123 +0000 UTC m=+63.767324874" watchObservedRunningTime="2025-04-30 03:29:36.401857684 +0000 UTC m=+72.326157418" Apr 30 03:29:36.523845 sshd[5349]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:36.530280 systemd[1]: sshd@17-143.198.63.212:22-139.178.89.65:33134.service: Deactivated successfully. Apr 30 03:29:36.533200 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:29:36.536024 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:29:36.538730 systemd-logind[1450]: Removed session 15. Apr 30 03:29:41.554872 systemd[1]: Started sshd@18-143.198.63.212:22-139.178.89.65:45404.service - OpenSSH per-connection server daemon (139.178.89.65:45404). Apr 30 03:29:41.654721 sshd[5368]: Accepted publickey for core from 139.178.89.65 port 45404 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:41.656756 sshd[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:41.662504 systemd-logind[1450]: New session 16 of user core. Apr 30 03:29:41.674912 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:29:41.879120 sshd[5368]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:41.887903 systemd[1]: sshd@18-143.198.63.212:22-139.178.89.65:45404.service: Deactivated successfully. Apr 30 03:29:41.889967 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:29:41.890810 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. 
Apr 30 03:29:41.892371 systemd-logind[1450]: Removed session 16. Apr 30 03:29:42.763057 kubelet[2556]: I0430 03:29:42.762825 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:46.904065 systemd[1]: Started sshd@19-143.198.63.212:22-139.178.89.65:37092.service - OpenSSH per-connection server daemon (139.178.89.65:37092). Apr 30 03:29:46.985658 sshd[5384]: Accepted publickey for core from 139.178.89.65 port 37092 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:46.991070 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:47.000665 systemd-logind[1450]: New session 17 of user core. Apr 30 03:29:47.006043 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:29:47.326363 sshd[5384]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:47.339678 systemd[1]: sshd@19-143.198.63.212:22-139.178.89.65:37092.service: Deactivated successfully. Apr 30 03:29:47.343178 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:29:47.346921 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:29:47.358022 systemd[1]: Started sshd@20-143.198.63.212:22-139.178.89.65:37108.service - OpenSSH per-connection server daemon (139.178.89.65:37108). Apr 30 03:29:47.361394 systemd-logind[1450]: Removed session 17. Apr 30 03:29:47.419482 sshd[5397]: Accepted publickey for core from 139.178.89.65 port 37108 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:47.420212 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:47.428230 systemd-logind[1450]: New session 18 of user core. Apr 30 03:29:47.435002 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:29:47.868823 sshd[5397]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:47.886457 systemd[1]: sshd@20-143.198.63.212:22-139.178.89.65:37108.service: Deactivated successfully. Apr 30 03:29:47.890824 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:29:47.894752 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:29:47.905831 systemd[1]: Started sshd@21-143.198.63.212:22-139.178.89.65:37116.service - OpenSSH per-connection server daemon (139.178.89.65:37116). Apr 30 03:29:47.912378 systemd-logind[1450]: Removed session 18. Apr 30 03:29:47.999131 sshd[5412]: Accepted publickey for core from 139.178.89.65 port 37116 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:48.001750 sshd[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:48.014876 systemd-logind[1450]: New session 19 of user core. Apr 30 03:29:48.020803 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:29:49.500008 systemd[1]: run-containerd-runc-k8s.io-7a1c80f5b65e098e14fff113d7432864c6876f81c7373a480908160201a8ccbd-runc.phGJ0w.mount: Deactivated successfully. Apr 30 03:29:51.236029 sshd[5412]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:51.253413 systemd[1]: Started sshd@22-143.198.63.212:22-139.178.89.65:37124.service - OpenSSH per-connection server daemon (139.178.89.65:37124). Apr 30 03:29:51.254081 systemd[1]: sshd@21-143.198.63.212:22-139.178.89.65:37116.service: Deactivated successfully. Apr 30 03:29:51.265222 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:29:51.282434 systemd-logind[1450]: Session 19 logged out. 
Waiting for processes to exit. Apr 30 03:29:51.287997 systemd-logind[1450]: Removed session 19. Apr 30 03:29:51.358964 sshd[5475]: Accepted publickey for core from 139.178.89.65 port 37124 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:51.361275 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:51.374554 systemd-logind[1450]: New session 20 of user core. Apr 30 03:29:51.379832 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:29:52.254383 sshd[5475]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:52.265530 systemd[1]: sshd@22-143.198.63.212:22-139.178.89.65:37124.service: Deactivated successfully. Apr 30 03:29:52.272371 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:29:52.276304 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:29:52.283735 systemd-logind[1450]: Removed session 20. Apr 30 03:29:52.292776 systemd[1]: Started sshd@23-143.198.63.212:22-139.178.89.65:37132.service - OpenSSH per-connection server daemon (139.178.89.65:37132). Apr 30 03:29:52.414413 sshd[5490]: Accepted publickey for core from 139.178.89.65 port 37132 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:52.417412 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:52.424309 systemd-logind[1450]: New session 21 of user core. Apr 30 03:29:52.431858 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:29:52.665471 kubelet[2556]: E0430 03:29:52.661910 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:52.716866 sshd[5490]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:52.727568 systemd[1]: sshd@23-143.198.63.212:22-139.178.89.65:37132.service: Deactivated successfully. Apr 30 03:29:52.733721 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:29:52.736673 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:29:52.741240 systemd-logind[1450]: Removed session 21. Apr 30 03:29:54.260931 kubelet[2556]: E0430 03:29:54.259857 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:57.735897 systemd[1]: Started sshd@24-143.198.63.212:22-139.178.89.65:58232.service - OpenSSH per-connection server daemon (139.178.89.65:58232). Apr 30 03:29:57.822439 sshd[5508]: Accepted publickey for core from 139.178.89.65 port 58232 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:29:57.825459 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:57.835613 systemd-logind[1450]: New session 22 of user core. Apr 30 03:29:57.840159 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:29:58.033803 sshd[5508]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:58.038910 systemd[1]: sshd@24-143.198.63.212:22-139.178.89.65:58232.service: Deactivated successfully. Apr 30 03:29:58.042078 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:29:58.046781 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:29:58.049414 systemd-logind[1450]: Removed session 22. 
Apr 30 03:29:58.260298 kubelet[2556]: E0430 03:29:58.259469 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:29:59.259794 kubelet[2556]: E0430 03:29:59.259355 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:30:03.061183 systemd[1]: Started sshd@25-143.198.63.212:22-139.178.89.65:58244.service - OpenSSH per-connection server daemon (139.178.89.65:58244). Apr 30 03:30:03.256275 sshd[5538]: Accepted publickey for core from 139.178.89.65 port 58244 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:03.263332 sshd[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:03.273861 systemd-logind[1450]: New session 23 of user core. Apr 30 03:30:03.284019 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 03:30:03.575667 sshd[5538]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:03.583044 systemd[1]: sshd@25-143.198.63.212:22-139.178.89.65:58244.service: Deactivated successfully. Apr 30 03:30:03.587703 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 03:30:03.589100 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. Apr 30 03:30:03.591019 systemd-logind[1450]: Removed session 23. Apr 30 03:30:07.258535 kubelet[2556]: E0430 03:30:07.258472 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:30:08.596164 systemd[1]: Started sshd@26-143.198.63.212:22-139.178.89.65:36766.service - OpenSSH per-connection server daemon (139.178.89.65:36766). Apr 30 03:30:08.644486 sshd[5551]: Accepted publickey for core from 139.178.89.65 port 36766 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:08.647248 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:08.655083 systemd-logind[1450]: New session 24 of user core. Apr 30 03:30:08.665263 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 03:30:08.820134 sshd[5551]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:08.826749 systemd[1]: sshd@26-143.198.63.212:22-139.178.89.65:36766.service: Deactivated successfully. Apr 30 03:30:08.830069 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 03:30:08.832229 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit. Apr 30 03:30:08.834082 systemd-logind[1450]: Removed session 24. Apr 30 03:30:13.841031 systemd[1]: Started sshd@27-143.198.63.212:22-139.178.89.65:36772.service - OpenSSH per-connection server daemon (139.178.89.65:36772). Apr 30 03:30:13.895425 sshd[5566]: Accepted publickey for core from 139.178.89.65 port 36772 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:13.898167 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:13.905079 systemd-logind[1450]: New session 25 of user core. Apr 30 03:30:13.912995 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 03:30:14.071995 sshd[5566]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:14.081351 systemd-logind[1450]: Session 25 logged out. 
Waiting for processes to exit. Apr 30 03:30:14.081803 systemd[1]: sshd@27-143.198.63.212:22-139.178.89.65:36772.service: Deactivated successfully. Apr 30 03:30:14.086727 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 03:30:14.089710 systemd-logind[1450]: Removed session 25.