Apr 30 03:44:31.821597 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:44:31.821616 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:44:31.821623 kernel: BIOS-provided physical RAM map:
Apr 30 03:44:31.821629 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 03:44:31.821633 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 03:44:31.821637 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 03:44:31.821643 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Apr 30 03:44:31.821647 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Apr 30 03:44:31.821653 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 30 03:44:31.821657 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 30 03:44:31.821662 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 03:44:31.821667 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 03:44:31.821671 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 03:44:31.821676 kernel: NX (Execute Disable) protection: active
Apr 30 03:44:31.821683 kernel: APIC: Static calls initialized
Apr 30 03:44:31.821688 kernel: SMBIOS 3.0.0 present.
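An aside on the e820 map above: the firmware hands the kernel a list of physical address ranges with types, and only the "usable" ranges become allocatable RAM. A minimal sketch of totaling those ranges from pasted dmesg text; the regex and the truncated sample string are illustrative, not part of this boot log.

import re

# Sum the "usable" ranges from BIOS-e820 lines like the ones above.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(dmesg_text):
    total = 0
    for start, end, kind in E820_RE.findall(dmesg_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

sample = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
"""
print(round(usable_bytes(sample) / 2**20), "MiB")  # ~1999 MiB for this VM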
Apr 30 03:44:31.821693 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 30 03:44:31.821697 kernel: Hypervisor detected: KVM
Apr 30 03:44:31.821702 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:44:31.821707 kernel: kvm-clock: using sched offset of 2883825387 cycles
Apr 30 03:44:31.821712 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:44:31.821717 kernel: tsc: Detected 2445.404 MHz processor
Apr 30 03:44:31.821723 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:44:31.821729 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:44:31.821734 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Apr 30 03:44:31.821739 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 03:44:31.821744 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:44:31.821749 kernel: Using GB pages for direct mapping
Apr 30 03:44:31.821754 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:44:31.821759 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Apr 30 03:44:31.821764 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:31.821769 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:31.821775 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:31.821780 kernel: ACPI: FACS 0x000000007CFE0000 000040
Apr 30 03:44:31.821785 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:31.821790 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:31.821795 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:31.821800 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:44:31.821805 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Apr 30 03:44:31.821810 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Apr 30 03:44:31.821818 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Apr 30 03:44:31.821823 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Apr 30 03:44:31.821829 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Apr 30 03:44:31.821834 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Apr 30 03:44:31.821839 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Apr 30 03:44:31.821844 kernel: No NUMA configuration found
Apr 30 03:44:31.821850 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Apr 30 03:44:31.821856 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Apr 30 03:44:31.821861 kernel: Zone ranges:
Apr 30 03:44:31.821866 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:44:31.821872 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Apr 30 03:44:31.821877 kernel: Normal empty
Apr 30 03:44:31.821882 kernel: Movable zone start for each node
Apr 30 03:44:31.821887 kernel: Early memory node ranges
Apr 30 03:44:31.821914 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 03:44:31.821920 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Apr 30 03:44:31.821928 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
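The guest registers kvm-clock here and switches to it as the active clocksource later in the log. On a running system the choice is visible in sysfs; a small sketch, assuming only the standard sysfs paths.

from pathlib import Path

base = Path("/sys/devices/system/clocksource/clocksource0")
print("current:  ", (base / "current_clocksource").read_text().strip())
print("available:", (base / "available_clocksource").read_text().strip())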
Apr 30 03:44:31.821933 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:44:31.821938 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 03:44:31.821943 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 30 03:44:31.821948 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:44:31.821953 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:44:31.821959 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:44:31.821964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:44:31.821969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:44:31.821976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:44:31.821981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:44:31.821987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:44:31.821992 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:44:31.821997 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:44:31.822002 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:44:31.822007 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:44:31.822012 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 30 03:44:31.822018 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:44:31.822024 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:44:31.822030 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:44:31.822035 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:44:31.822040 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:44:31.822045 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:44:31.822050 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 03:44:31.822057 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:44:31.822062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:44:31.822069 kernel: random: crng init done
Apr 30 03:44:31.822074 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:44:31.822080 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:44:31.822085 kernel: Fallback order for Node 0: 0
Apr 30 03:44:31.822090 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
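The command line above carries the dm-verity parameters (verity.usr, verity.usrhash) that the initrd later uses to assemble /dev/mapper/usr, and it repeats some keys (rootflags, console). A sketch of splitting /proc/cmdline into a map, keeping repeated keys as lists; illustrative only.

from pathlib import Path

def parse_cmdline(text):
    # Bare flags get None; "key=value" tokens keep their value.
    args = {}
    for token in text.split():
        key, sep, value = token.partition("=")
        args.setdefault(key, []).append(value if sep else None)
    return args

args = parse_cmdline(Path("/proc/cmdline").read_text())
print(args.get("verity.usrhash"))  # dm-verity root hash for the /usr image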
Apr 30 03:44:31.822095 kernel: Policy zone: DMA32
Apr 30 03:44:31.822100 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:44:31.822106 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125152K reserved, 0K cma-reserved)
Apr 30 03:44:31.822111 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:44:31.822118 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:44:31.822123 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:44:31.822128 kernel: Dynamic Preempt: voluntary
Apr 30 03:44:31.822133 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:44:31.822139 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:44:31.822145 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:44:31.822150 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:44:31.822156 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:44:31.822161 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:44:31.822166 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:44:31.822173 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:44:31.822178 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:44:31.822183 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:44:31.822189 kernel: Console: colour VGA+ 80x25
Apr 30 03:44:31.822194 kernel: printk: console [tty0] enabled
Apr 30 03:44:31.822199 kernel: printk: console [ttyS0] enabled
Apr 30 03:44:31.822206 kernel: ACPI: Core revision 20230628
Apr 30 03:44:31.822216 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:44:31.822227 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:44:31.822238 kernel: x2apic enabled
Apr 30 03:44:31.822244 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:44:31.822249 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:44:31.822254 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 03:44:31.822260 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Apr 30 03:44:31.822265 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 03:44:31.822270 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 03:44:31.822276 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 03:44:31.822287 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:44:31.822293 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:44:31.822298 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:44:31.822304 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:44:31.822311 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 03:44:31.822316 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 03:44:31.822322 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:44:31.822327 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:44:31.822333 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:44:31.822340 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:44:31.822346 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:44:31.822351 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:44:31.822357 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 03:44:31.822362 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:44:31.822368 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:44:31.822373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:44:31.822390 kernel: landlock: Up and running.
Apr 30 03:44:31.822397 kernel: SELinux: Initializing.
Apr 30 03:44:31.822403 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:44:31.822408 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:44:31.822414 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 03:44:31.822420 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:44:31.822425 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:44:31.822431 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:44:31.822437 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 03:44:31.822442 kernel: ... version: 0
Apr 30 03:44:31.822449 kernel: ... bit width: 48
Apr 30 03:44:31.822455 kernel: ... generic registers: 6
Apr 30 03:44:31.822460 kernel: ... value mask: 0000ffffffffffff
Apr 30 03:44:31.822466 kernel: ... max period: 00007fffffffffff
Apr 30 03:44:31.822471 kernel: ... fixed-purpose events: 0
Apr 30 03:44:31.822477 kernel: ... event mask: 000000000000003f
Apr 30 03:44:31.822482 kernel: signal: max sigframe size: 1776
Apr 30 03:44:31.822488 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:44:31.822493 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:44:31.822500 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:44:31.822506 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:44:31.822511 kernel: .... node #0, CPUs: #1
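The mitigation lines above (Spectre V1/V2, RETBleed, Speculative Store Bypass) have runtime counterparts in sysfs; a small sketch that lists them, assuming only the standard vulnerabilities directory.

from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    # Each file holds a one-line status such as "Mitigation: Retpolines".
    print(f"{entry.name:30s} {entry.read_text().strip()}")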
Apr 30 03:44:31.822517 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:44:31.822522 kernel: smpboot: Max logical packages: 1
Apr 30 03:44:31.822528 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Apr 30 03:44:31.822533 kernel: devtmpfs: initialized
Apr 30 03:44:31.822539 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:44:31.822544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:44:31.822551 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:44:31.822557 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:44:31.822562 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:44:31.822568 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:44:31.822573 kernel: audit: type=2000 audit(1745984670.699:1): state=initialized audit_enabled=0 res=1
Apr 30 03:44:31.822579 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:44:31.822584 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:44:31.822590 kernel: cpuidle: using governor menu
Apr 30 03:44:31.822595 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:44:31.822601 kernel: dca service started, version 1.12.1
Apr 30 03:44:31.822608 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 30 03:44:31.822613 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:44:31.822619 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:44:31.822624 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:44:31.822630 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:44:31.822636 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:44:31.822641 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:44:31.822647 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:44:31.822653 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:44:31.822659 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:44:31.822664 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:44:31.822670 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:44:31.822675 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:44:31.822681 kernel: ACPI: Interpreter enabled
Apr 30 03:44:31.822687 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:44:31.822692 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:44:31.822698 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:44:31.822703 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:44:31.822710 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 03:44:31.822715 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:44:31.822828 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:44:31.825233 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 03:44:31.825313 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 03:44:31.825323 kernel: PCI host bridge to bus 0000:00
Apr 30 03:44:31.825401 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:44:31.825467 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
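A quick arithmetic check on the BogoMIPS figures above, assuming HZ=1000 (consistent with the printed numbers): with calibration skipped, lpj is preset from the TSC frequency and BogoMIPS is derived from it.

HZ = 1000                       # assumed kernel tick rate
lpj = 2445404                   # preset loops-per-jiffy from the log
bogomips = lpj / (500000 / HZ)  # roughly 2 x 2445.404 MHz
print(round(bogomips, 2))       # 4890.81 (the kernel truncates to 4890.80)
print(round(2 * bogomips, 2))   # 9781.62; the log's 9781.61 sums truncated per-CPU values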
Apr 30 03:44:31.825522 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:44:31.825576 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Apr 30 03:44:31.825629 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 30 03:44:31.825682 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 30 03:44:31.825736 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:44:31.825817 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 03:44:31.825911 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 30 03:44:31.825986 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Apr 30 03:44:31.826051 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Apr 30 03:44:31.826113 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Apr 30 03:44:31.826175 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Apr 30 03:44:31.826238 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:44:31.826312 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:31.826386 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Apr 30 03:44:31.826458 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:31.826520 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Apr 30 03:44:31.826590 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:31.826651 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Apr 30 03:44:31.826723 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:31.826786 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Apr 30 03:44:31.826855 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:31.827521 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Apr 30 03:44:31.827600 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:31.827663 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Apr 30 03:44:31.827738 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:31.827800 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Apr 30 03:44:31.827870 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:31.829633 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Apr 30 03:44:31.829711 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 03:44:31.829773 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Apr 30 03:44:31.829844 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 03:44:31.829932 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 03:44:31.830005 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 03:44:31.830066 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Apr 30 03:44:31.830126 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Apr 30 03:44:31.830193 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 03:44:31.830254 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 30 03:44:31.830334 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 03:44:31.830413 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Apr 30 03:44:31.830479 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Apr 30 03:44:31.830541 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Apr 30 03:44:31.830601 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 03:44:31.830661 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 03:44:31.830721 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:44:31.830812 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 03:44:31.830879 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Apr 30 03:44:31.831016 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 03:44:31.831080 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 03:44:31.831140 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:44:31.831208 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 03:44:31.831277 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Apr 30 03:44:31.831338 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Apr 30 03:44:31.831411 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 03:44:31.831473 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 03:44:31.831533 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:44:31.831601 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 03:44:31.831664 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Apr 30 03:44:31.831729 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 03:44:31.831791 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 03:44:31.831851 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:44:31.833645 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 03:44:31.833722 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Apr 30 03:44:31.833787 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Apr 30 03:44:31.833847 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 03:44:31.833973 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 03:44:31.834058 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:44:31.834131 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 03:44:31.834197 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Apr 30 03:44:31.834286 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Apr 30 03:44:31.834351 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 03:44:31.834426 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 03:44:31.834488 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:44:31.834500 kernel: acpiphp: Slot [0] registered
Apr 30 03:44:31.834570 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 03:44:31.834634 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Apr 30 03:44:31.834697 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Apr 30 03:44:31.834758 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Apr 30 03:44:31.834819 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 03:44:31.834879 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 03:44:31.835046 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:44:31.835058 kernel: acpiphp: Slot [0-2] registered
Apr 30 03:44:31.835118 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 03:44:31.835178 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 03:44:31.835239 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:44:31.835247 kernel: acpiphp: Slot [0-3] registered
Apr 30 03:44:31.835304 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 03:44:31.835363 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 03:44:31.835435 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:44:31.835447 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:44:31.835453 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:44:31.835459 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:44:31.835465 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:44:31.835470 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 03:44:31.835476 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 03:44:31.835481 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 03:44:31.835487 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 03:44:31.835492 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 03:44:31.835499 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 03:44:31.835505 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 03:44:31.835511 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 03:44:31.835516 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 03:44:31.835522 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 03:44:31.835527 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 03:44:31.835533 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 03:44:31.835538 kernel: iommu: Default domain type: Translated
Apr 30 03:44:31.835544 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:44:31.835551 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:44:31.835556 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:44:31.835562 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 03:44:31.835571 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Apr 30 03:44:31.835693 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 03:44:31.835801 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 03:44:31.835953 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:44:31.835969 kernel: vgaarb: loaded
Apr 30 03:44:31.835985 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:44:31.835997 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:44:31.836007 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:44:31.836017 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:44:31.836027 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:44:31.836038 kernel: pnp: PnP ACPI init
Apr 30 03:44:31.836161 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 30 03:44:31.836179 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 03:44:31.836186 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:44:31.836195 kernel: NET: Registered PF_INET protocol family
Apr 30 03:44:31.836201 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:44:31.836207 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:44:31.836212 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:44:31.836218 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:44:31.836224 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:44:31.836230 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:44:31.836236 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:44:31.836243 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:44:31.836249 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:44:31.836254 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:44:31.836324 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 03:44:31.836398 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 03:44:31.836463 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 03:44:31.836523 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 03:44:31.836584 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 03:44:31.836662 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 03:44:31.836796 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 03:44:31.836858 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 03:44:31.836967 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:44:31.837030 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 03:44:31.837091 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 03:44:31.837150 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:44:31.837208 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 03:44:31.837273 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 03:44:31.837332 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:44:31.837406 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 03:44:31.837469 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 03:44:31.837529 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:44:31.837589 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 03:44:31.837707 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 03:44:31.837823 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:44:31.838989 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 03:44:31.839102 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 03:44:31.839172 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:44:31.839240 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 03:44:31.839306 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 30 03:44:31.839370 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 03:44:31.839448 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:44:31.839515 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
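On the hash-table lines higher up, "order: N" is the allocation size as a power of two of contiguous pages (2^N pages of 4096 bytes); a small consistency check for the TCP established table.

PAGE = 4096
entries, order, reported = 16384, 5, 131072  # from the log line above
assert (2 ** order) * PAGE == reported       # order 5 = 32 pages
assert reported // entries == 8              # 8-byte buckets for this table
print("ok")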
Apr 30 03:44:31.839579 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 30 03:44:31.839651 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 03:44:31.839715 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:44:31.839780 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 03:44:31.839848 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 30 03:44:31.839933 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 03:44:31.840006 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:44:31.840073 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:44:31.840131 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:44:31.840186 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:44:31.840242 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Apr 30 03:44:31.840302 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 30 03:44:31.840360 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 30 03:44:31.840442 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 30 03:44:31.840505 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:44:31.840573 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 30 03:44:31.840633 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:44:31.840698 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 30 03:44:31.840762 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:44:31.840828 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 30 03:44:31.840886 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:44:31.843006 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 30 03:44:31.843074 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:44:31.843140 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 30 03:44:31.843206 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:44:31.843270 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Apr 30 03:44:31.843328 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 30 03:44:31.843396 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:44:31.843468 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Apr 30 03:44:31.843528 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Apr 30 03:44:31.843589 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:44:31.843653 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Apr 30 03:44:31.843711 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 30 03:44:31.843769 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:44:31.843778 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 03:44:31.843785 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:44:31.843791 kernel: Initialise system trusted keyrings
Apr 30 03:44:31.843798 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:44:31.843807 kernel: Key type asymmetric registered
Apr 30 03:44:31.843813 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:44:31.843819 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
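The PCI topology enumerated above can be re-listed from sysfs on the running system (the same data lspci -nn formats); a sketch assuming only standard sysfs attribute names.

from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()  # e.g. 0x1af4 (virtio)
    device = (dev / "device").read_text().strip()
    pclass = (dev / "class").read_text().strip()   # e.g. 0x060400 = PCI bridge
    print(f"{dev.name} [{vendor[2:]}:{device[2:]}] class {pclass}")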
Apr 30 03:44:31.843826 kernel: io scheduler mq-deadline registered
Apr 30 03:44:31.843832 kernel: io scheduler kyber registered
Apr 30 03:44:31.843838 kernel: io scheduler bfq registered
Apr 30 03:44:31.843927 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Apr 30 03:44:31.843998 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Apr 30 03:44:31.844063 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Apr 30 03:44:31.844131 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Apr 30 03:44:31.844195 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Apr 30 03:44:31.844257 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Apr 30 03:44:31.844321 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Apr 30 03:44:31.844393 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Apr 30 03:44:31.844458 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Apr 30 03:44:31.844563 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Apr 30 03:44:31.844629 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Apr 30 03:44:31.844696 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Apr 30 03:44:31.844758 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Apr 30 03:44:31.844819 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Apr 30 03:44:31.844880 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Apr 30 03:44:31.847080 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Apr 30 03:44:31.847095 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 03:44:31.847162 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Apr 30 03:44:31.847244 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Apr 30 03:44:31.847266 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:44:31.847273 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Apr 30 03:44:31.847280 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:44:31.847286 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:44:31.847292 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:44:31.847298 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:44:31.847305 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:44:31.847391 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 03:44:31.847403 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Apr 30 03:44:31.847473 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 03:44:31.847530 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:44:31 UTC (1745984671)
Apr 30 03:44:31.847585 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 30 03:44:31.847594 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 30 03:44:31.847601 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:44:31.847607 kernel: Segment Routing with IPv6
Apr 30 03:44:31.847614 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:44:31.847620 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:44:31.847629 kernel: Key type dns_resolver registered
Apr 30 03:44:31.847635 kernel: IPI shorthand broadcast: enabled
Apr 30 03:44:31.847642 kernel: sched_clock: Marking stable (1041191612, 133783189)->(1183867306, -8892505)
Apr 30 03:44:31.847647 kernel: registered taskstats version 1
Apr 30 03:44:31.847653 kernel: Loading compiled-in X.509 certificates
Apr 30 03:44:31.847660 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
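The rtc_cmos line above prints both the wall-clock time and the matching Unix epoch; a one-line arithmetic check.

from datetime import datetime, timezone

ts = datetime(2025, 4, 30, 3, 44, 31, tzinfo=timezone.utc)
assert int(ts.timestamp()) == 1745984671  # the value in parentheses in the log
print(ts.isoformat(), "->", int(ts.timestamp()))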
Apr 30 03:44:31.847666 kernel: Key type .fscrypt registered
Apr 30 03:44:31.847672 kernel: Key type fscrypt-provisioning registered
Apr 30 03:44:31.847678 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:44:31.847686 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:44:31.847693 kernel: ima: No architecture policies found
Apr 30 03:44:31.847698 kernel: clk: Disabling unused clocks
Apr 30 03:44:31.847705 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:44:31.847711 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:44:31.847717 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:44:31.847725 kernel: Run /init as init process
Apr 30 03:44:31.847731 kernel: with arguments:
Apr 30 03:44:31.847737 kernel: /init
Apr 30 03:44:31.847745 kernel: with environment:
Apr 30 03:44:31.847751 kernel: HOME=/
Apr 30 03:44:31.847757 kernel: TERM=linux
Apr 30 03:44:31.847762 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:44:31.847770 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:44:31.847779 systemd[1]: Detected virtualization kvm.
Apr 30 03:44:31.847786 systemd[1]: Detected architecture x86-64.
Apr 30 03:44:31.847792 systemd[1]: Running in initrd.
Apr 30 03:44:31.847800 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:44:31.847806 systemd[1]: Hostname set to <localhost>.
Apr 30 03:44:31.847813 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:44:31.847819 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:44:31.847826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:44:31.847834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:44:31.847841 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:44:31.847847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:44:31.847856 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:44:31.847862 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:44:31.847870 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:44:31.847876 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:44:31.847883 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:44:31.847905 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:44:31.847915 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:44:31.847922 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:44:31.847928 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:44:31.847934 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:44:31.847941 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
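The device unit names above, e.g. dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, come from systemd's path escaping (what systemd-escape -p prints). A simplified re-implementation for illustration; the real rules handle a few more corner cases.

def escape_path(path):
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")  # path separators become dashes
        elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device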
Apr 30 03:44:31.847947 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:44:31.847954 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:44:31.847960 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:44:31.847967 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:44:31.847975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:44:31.847981 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:44:31.847988 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:44:31.847994 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:44:31.848000 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:44:31.848007 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:44:31.848013 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:44:31.848020 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:44:31.848026 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:44:31.848051 systemd-journald[188]: Collecting audit messages is disabled.
Apr 30 03:44:31.848069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:44:31.848075 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:44:31.848082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:44:31.848091 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:44:31.848098 systemd-journald[188]: Journal started
Apr 30 03:44:31.848115 systemd-journald[188]: Runtime Journal (/run/log/journal/5be4ce21156d49778ccee502572975be) is 4.8M, max 38.4M, 33.6M free.
Apr 30 03:44:31.840968 systemd-modules-load[189]: Inserted module 'overlay'
Apr 30 03:44:31.876636 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:44:31.876653 kernel: Bridge firewalling registered
Apr 30 03:44:31.876661 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:44:31.864547 systemd-modules-load[189]: Inserted module 'br_netfilter'
Apr 30 03:44:31.877950 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:44:31.878581 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:31.885040 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:44:31.886297 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:44:31.891037 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:44:31.896000 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:44:31.898580 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:44:31.901161 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:44:31.904020 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:44:31.907044 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:44:31.908395 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:44:31.909935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:44:31.918697 dracut-cmdline[216]: dracut-dracut-053
Apr 30 03:44:31.919285 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:44:31.922324 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:44:31.925659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:44:31.942112 systemd-resolved[221]: Positive Trust Anchors:
Apr 30 03:44:31.942124 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:44:31.942149 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:44:31.944530 systemd-resolved[221]: Defaulting to hostname 'linux'.
Apr 30 03:44:31.945273 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:44:31.949221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:44:31.978913 kernel: SCSI subsystem initialized
Apr 30 03:44:31.986921 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:44:31.994922 kernel: iscsi: registered transport (tcp)
Apr 30 03:44:32.011932 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:44:32.011981 kernel: QLogic iSCSI HBA Driver
Apr 30 03:44:32.043982 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:44:32.048024 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:44:32.068282 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:44:32.068330 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:44:32.068346 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:44:32.110940 kernel: raid6: avx2x4 gen() 32395 MB/s
Apr 30 03:44:32.127933 kernel: raid6: avx2x2 gen() 29835 MB/s
Apr 30 03:44:32.145075 kernel: raid6: avx2x1 gen() 24955 MB/s
Apr 30 03:44:32.145164 kernel: raid6: using algorithm avx2x4 gen() 32395 MB/s
Apr 30 03:44:32.163162 kernel: raid6: .... xor() 4738 MB/s, rmw enabled
Apr 30 03:44:32.163220 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 03:44:32.179947 kernel: xor: automatically using best checksumming function avx
Apr 30 03:44:32.302931 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:44:32.315404 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:44:32.324103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:44:32.346612 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Apr 30 03:44:32.351917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:44:32.357346 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:44:32.369251 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Apr 30 03:44:32.391022 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:44:32.397018 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:44:32.432922 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:44:32.439041 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:44:32.452391 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:44:32.454304 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:44:32.455703 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:44:32.456957 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:44:32.462025 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:44:32.471356 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:44:32.498923 kernel: scsi host0: Virtio SCSI HBA
Apr 30 03:44:32.535922 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 03:44:32.545565 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:44:32.545614 kernel: ACPI: bus type USB registered
Apr 30 03:44:32.549518 kernel: usbcore: registered new interface driver usbfs
Apr 30 03:44:32.549546 kernel: usbcore: registered new interface driver hub
Apr 30 03:44:32.554918 kernel: usbcore: registered new device driver usb
Apr 30 03:44:32.569000 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:44:32.569030 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:44:32.569238 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:44:32.569880 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:44:32.571710 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:44:32.572818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:44:32.572932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:32.574614 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:44:32.592774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:44:32.598340 kernel: libata version 3.00 loaded.
Apr 30 03:44:32.602128 kernel: ahci 0000:00:1f.2: version 3.0
Apr 30 03:44:32.615780 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 30 03:44:32.615793 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 03:44:32.619077 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 30 03:44:32.619175 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 30 03:44:32.619252 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 03:44:32.619332 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 03:44:32.619430 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 03:44:32.619509 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 03:44:32.619585 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 03:44:32.619661 kernel: scsi host1: ahci
Apr 30 03:44:32.619751 kernel: scsi host2: ahci
Apr 30 03:44:32.619828 kernel: hub 1-0:1.0: USB hub found
Apr 30 03:44:32.619955 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 03:44:32.620041 kernel: scsi host3: ahci
Apr 30 03:44:32.620123 kernel: scsi host4: ahci
Apr 30 03:44:32.620198 kernel: scsi host5: ahci
Apr 30 03:44:32.620279 kernel: scsi host6: ahci
Apr 30 03:44:32.620353 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Apr 30 03:44:32.620373 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Apr 30 03:44:32.620381 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Apr 30 03:44:32.620391 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Apr 30 03:44:32.620398 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Apr 30 03:44:32.620405 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Apr 30 03:44:32.620412 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 03:44:32.620504 kernel: hub 2-0:1.0: USB hub found
Apr 30 03:44:32.620588 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 03:44:32.673702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:32.682041 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:44:32.696194 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:44:32.855140 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 30 03:44:32.934912 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 30 03:44:32.934989 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:32.935011 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:32.937492 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 30 03:44:32.937527 kernel: ata1.00: applying bridge limits
Apr 30 03:44:32.937905 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:32.941021 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:32.943047 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 30 03:44:32.947307 kernel: ata1.00: configured for UDMA/100
Apr 30 03:44:32.947346 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 03:44:32.976981 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 30 03:44:32.997022 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 03:44:32.997127 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 03:44:32.997207 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 30 03:44:32.997284 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 03:44:32.997375 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:44:32.997385 kernel: GPT:17805311 != 80003071
Apr 30 03:44:32.997392 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:44:32.997399 kernel: GPT:17805311 != 80003071
Apr 30 03:44:32.997406 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:44:32.997413 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:44:32.997420 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:44:32.997501 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 03:44:33.003009 kernel: usbcore: registered new interface driver usbhid
Apr 30 03:44:33.003067 kernel: usbhid: USB HID core driver
Apr 30 03:44:33.008915 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Apr 30 03:44:33.008937 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 30 03:44:33.017517 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 30 03:44:33.030267 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:44:33.030280 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (450)
Apr 30 03:44:33.030288 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (468)
Apr 30 03:44:33.030300 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 30 03:44:33.032940 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 03:44:33.041598 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 03:44:33.046948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 03:44:33.051325 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 03:44:33.051838 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 30 03:44:33.061078 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
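The GPT warnings above are expected on a first boot from a grown disk image: the primary header (LBA 1) still records a backup-header location at the end of the original, smaller image (LBA 17805311) instead of this disk's real last sector (80003071), and the disk-uuid step that follows rewrites the headers. A sketch of reading those fields directly, assuming 512-byte sectors, root access, and /dev/sda as an example device.

import struct

with open("/dev/sda", "rb") as disk:
    disk.seek(512)           # primary GPT header lives at LBA 1
    header = disk.read(92)   # fixed-size header per the UEFI spec

signature, = struct.unpack_from("<8s", header, 0)   # b'EFI PART'
backup_lba, = struct.unpack_from("<Q", header, 32)  # where the backup should be
print(signature, "backup header expected at LBA", backup_lba)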
Apr 30 03:44:33.065034 disk-uuid[577]: Primary Header is updated. Apr 30 03:44:33.065034 disk-uuid[577]: Secondary Entries is updated. Apr 30 03:44:33.065034 disk-uuid[577]: Secondary Header is updated. Apr 30 03:44:33.068916 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:44:33.073929 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:44:33.077921 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:44:34.079619 disk-uuid[578]: The operation has completed successfully. Apr 30 03:44:34.080501 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 03:44:34.116958 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:44:34.117045 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:44:34.131989 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:44:34.134620 sh[599]: Success Apr 30 03:44:34.143918 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 30 03:44:34.186272 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:44:34.202246 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:44:34.203833 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:44:34.220922 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:44:34.220963 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:44:34.220975 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:44:34.224314 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:44:34.224363 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:44:34.232921 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 03:44:34.234229 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:44:34.235253 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:44:34.241007 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:44:34.242561 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:44:34.257831 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:44:34.257869 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:44:34.257880 kernel: BTRFS info (device sda6): using free space tree Apr 30 03:44:34.262233 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 03:44:34.262256 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 03:44:34.270072 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:44:34.269828 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:44:34.273613 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:44:34.279287 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:44:34.306073 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:44:34.316026 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
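The verity-setup.service entries above are Flatcar's dm-verity integration: the read-only /usr partition is opened against a root hash supplied at boot, so every block read from /dev/mapper/usr is checked against a hash tree before use. A hand-driven sketch of the same operation (partition, hash value, and offset are placeholders, not values taken from this boot):

    # open the USR partition as /dev/mapper/usr, verifying data blocks
    # against the hash tree appended to the same partition
    veritysetup open /dev/sda3 usr /dev/sda3 "$ROOT_HASH" --hash-offset "$HASH_OFFSET"
    mount -o ro /dev/mapper/usr /sysusr/usr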
Apr 30 03:44:34.338317 systemd-networkd[780]: lo: Link UP
Apr 30 03:44:34.338324 systemd-networkd[780]: lo: Gained carrier
Apr 30 03:44:34.341685 systemd-networkd[780]: Enumeration completed
Apr 30 03:44:34.342281 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:44:34.342958 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:34.342961 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:44:34.344965 ignition[727]: Ignition 2.19.0
Apr 30 03:44:34.345143 systemd[1]: Reached target network.target - Network.
Apr 30 03:44:34.344970 ignition[727]: Stage: fetch-offline
Apr 30 03:44:34.345263 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:34.344993 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:34.345265 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:44:34.344999 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:34.345699 systemd-networkd[780]: eth0: Link UP
Apr 30 03:44:34.345056 ignition[727]: parsed url from cmdline: ""
Apr 30 03:44:34.345702 systemd-networkd[780]: eth0: Gained carrier
Apr 30 03:44:34.345058 ignition[727]: no config URL provided
Apr 30 03:44:34.345709 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:34.345061 ignition[727]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:44:34.346597 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:44:34.345067 ignition[727]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:44:34.348178 systemd-networkd[780]: eth1: Link UP
Apr 30 03:44:34.345070 ignition[727]: failed to fetch config: resource requires networking
Apr 30 03:44:34.348181 systemd-networkd[780]: eth1: Gained carrier
Apr 30 03:44:34.345198 ignition[727]: Ignition finished successfully
Apr 30 03:44:34.348187 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:34.352011 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
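Both NICs are matched by the stock catch-all unit rather than a machine-specific one, which is what the "potentially unpredictable interface name" warning is about. A zz-default.network along these lines (contents assumed for illustration, not read from this system) matches any otherwise-unconfigured interface and enables DHCP:

    [Match]
    Name=*

    [Network]
    DHCP=yes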
Apr 30 03:44:34.361321 ignition[789]: Ignition 2.19.0
Apr 30 03:44:34.361332 ignition[789]: Stage: fetch
Apr 30 03:44:34.361460 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:34.361468 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:34.361519 ignition[789]: parsed url from cmdline: ""
Apr 30 03:44:34.361521 ignition[789]: no config URL provided
Apr 30 03:44:34.361525 ignition[789]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:44:34.361531 ignition[789]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:44:34.361545 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 03:44:34.361635 ignition[789]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 30 03:44:34.386959 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 03:44:34.405933 systemd-networkd[780]: eth0: DHCPv4 address 37.27.250.194/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 03:44:34.561866 ignition[789]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 30 03:44:34.566400 ignition[789]: GET result: OK
Apr 30 03:44:34.566535 ignition[789]: parsing config with SHA512: 36756db44a55f2e7f0a6f19cacb62bb292c1f31705f019f13308fce4ad152050ecb0b9c76205a8d5fb388f549efe947c541efcbab935996378f108228d6511b4
Apr 30 03:44:34.572240 unknown[789]: fetched base config from "system"
Apr 30 03:44:34.572254 unknown[789]: fetched base config from "system"
Apr 30 03:44:34.572841 ignition[789]: fetch: fetch complete
Apr 30 03:44:34.572262 unknown[789]: fetched user config from "hetzner"
Apr 30 03:44:34.572850 ignition[789]: fetch: fetch passed
Apr 30 03:44:34.575147 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:44:34.572945 ignition[789]: Ignition finished successfully
Apr 30 03:44:34.583223 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:44:34.604531 ignition[796]: Ignition 2.19.0
Apr 30 03:44:34.604551 ignition[796]: Stage: kargs
Apr 30 03:44:34.604802 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:34.604817 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:34.609124 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:44:34.606498 ignition[796]: kargs: kargs passed
Apr 30 03:44:34.606564 ignition[796]: Ignition finished successfully
Apr 30 03:44:34.618104 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:44:34.634764 ignition[803]: Ignition 2.19.0
Apr 30 03:44:34.634854 ignition[803]: Stage: disks
Apr 30 03:44:34.635106 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:34.637259 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:44:34.635119 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:34.638706 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:44:34.636216 ignition[803]: disks: disks passed
Apr 30 03:44:34.639890 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:44:34.636266 ignition[803]: Ignition finished successfully
Apr 30 03:44:34.641603 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:44:34.643469 systemd[1]: Reached target sysinit.target - System Initialization.
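The userdata fetched from the metadata service is only identified by its SHA512 above; the contents themselves are never logged. For orientation, a minimal Ignition config of the kind served at that endpoint looks roughly like this (illustrative only, not the config from this boot; key and file contents are placeholders):

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/etc/flatcar/update.conf",
            "contents": { "source": "data:,SERVER=disabled%0A" } }
        ]
      }
    }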
Apr 30 03:44:34.645313 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:44:34.662105 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:44:34.678177 systemd-fsck[812]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 03:44:34.681207 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:44:34.690004 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:44:34.760935 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:44:34.761396 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:44:34.762194 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:44:34.767979 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:44:34.770027 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:44:34.772469 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:44:34.774344 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:44:34.775128 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:44:34.779174 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:44:34.790230 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (820)
Apr 30 03:44:34.790252 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:44:34.790263 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:44:34.790273 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:44:34.795193 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:44:34.795228 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:44:34.798068 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:44:34.803350 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:44:34.827504 coreos-metadata[822]: Apr 30 03:44:34.827 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 03:44:34.828945 coreos-metadata[822]: Apr 30 03:44:34.828 INFO Fetch successful
Apr 30 03:44:34.830538 coreos-metadata[822]: Apr 30 03:44:34.830 INFO wrote hostname ci-4081-3-3-c-b54c1f5c93 to /sysroot/etc/hostname
Apr 30 03:44:34.831831 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:44:34.833658 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:44:34.836951 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:44:34.840266 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:44:34.843435 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:44:34.901250 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:44:34.906025 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:44:34.909187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:44:34.913912 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:44:34.928537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
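The hostname agent is simply hitting the Hetzner metadata service shown in its own log lines; the same endpoints can be queried by hand from a booted machine:

    # the exact URL the agent fetched above
    curl -s http://169.254.169.254/hetzner/v1/metadata/hostname
    # the full metadata document
    curl -s http://169.254.169.254/hetzner/v1/metadata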
Apr 30 03:44:34.930253 ignition[941]: INFO : Ignition 2.19.0
Apr 30 03:44:34.930253 ignition[941]: INFO : Stage: mount
Apr 30 03:44:34.930253 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:34.930253 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:34.930253 ignition[941]: INFO : mount: mount passed
Apr 30 03:44:34.930253 ignition[941]: INFO : Ignition finished successfully
Apr 30 03:44:34.931563 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:44:34.936005 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:44:35.217433 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:44:35.222117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:44:35.233107 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (953)
Apr 30 03:44:35.233153 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:44:35.236993 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:44:35.237021 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:44:35.243060 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:44:35.243084 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:44:35.245219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:44:35.261553 ignition[969]: INFO : Ignition 2.19.0
Apr 30 03:44:35.261553 ignition[969]: INFO : Stage: files
Apr 30 03:44:35.262629 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:35.262629 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:35.262629 ignition[969]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:44:35.264536 ignition[969]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:44:35.264536 ignition[969]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:44:35.266207 ignition[969]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:44:35.266933 ignition[969]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:44:35.266933 ignition[969]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:44:35.266560 unknown[969]: wrote ssh authorized keys file for user: core
Apr 30 03:44:35.269015 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 03:44:35.269015 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 03:44:35.269015 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:44:35.269015 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 03:44:35.478889 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 03:44:35.650114 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 30 03:44:35.970040 systemd-networkd[780]: eth1: Gained IPv6LL
Apr 30 03:44:37.308614 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:44:37.308614 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:44:37.311549 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:44:37.939163 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 03:44:38.078886 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:44:38.078886 ignition[969]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:44:38.081527 ignition[969]: INFO : files: files passed
Apr 30 03:44:38.081527 ignition[969]: INFO : Ignition finished successfully
Apr 30 03:44:38.082578 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:44:38.096996 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:44:38.100079 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:44:38.102204 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:44:38.102267 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:44:38.111256 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:44:38.111256 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:44:38.113180 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:44:38.113419 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:44:38.114789 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:44:38.120007 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:44:38.134121 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:44:38.134197 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:44:38.134979 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:44:38.135754 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:44:38.136874 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:44:38.142023 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:44:38.152485 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:44:38.166047 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:44:38.174606 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:44:38.175483 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:44:38.176702 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:44:38.177768 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:44:38.177955 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:44:38.179008 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:44:38.179706 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:44:38.180933 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:44:38.182009 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:44:38.183069 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:44:38.184297 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:44:38.185506 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:44:38.186758 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:44:38.187951 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:44:38.189184 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:44:38.190305 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:44:38.190395 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:44:38.191723 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:44:38.192392 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:44:38.193301 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:44:38.193381 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:44:38.194387 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:44:38.194466 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:44:38.195944 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:44:38.196037 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:44:38.197117 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:44:38.197229 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:44:38.198197 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:44:38.198317 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:44:38.205217 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:44:38.205667 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:44:38.205791 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:44:38.209046 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:44:38.209490 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:44:38.209611 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:44:38.210307 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:44:38.210422 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:44:38.217185 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:44:38.217261 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:44:38.222573 ignition[1023]: INFO : Ignition 2.19.0
Apr 30 03:44:38.223192 ignition[1023]: INFO : Stage: umount
Apr 30 03:44:38.224558 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:44:38.224558 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:44:38.224558 ignition[1023]: INFO : umount: umount passed
Apr 30 03:44:38.224558 ignition[1023]: INFO : Ignition finished successfully
Apr 30 03:44:38.227723 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:44:38.227814 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:44:38.229763 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:44:38.230457 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:44:38.230498 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:44:38.231014 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:44:38.231047 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:44:38.233233 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:44:38.233295 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:44:38.234153 systemd[1]: Stopped target network.target - Network.
Apr 30 03:44:38.235000 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:44:38.235040 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:44:38.235985 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:44:38.236831 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:44:38.237045 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:44:38.237755 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:44:38.238737 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:44:38.239784 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:44:38.239828 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:44:38.240649 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:44:38.240677 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:44:38.241641 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:44:38.241673 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:44:38.242765 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:44:38.242797 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:44:38.243775 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:44:38.244726 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:44:38.246084 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:44:38.246151 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:44:38.247096 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:44:38.247153 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:44:38.247935 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 30 03:44:38.251496 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:44:38.251581 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:44:38.251947 systemd-networkd[780]: eth1: DHCPv6 lease lost
Apr 30 03:44:38.254446 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:44:38.254526 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:44:38.255827 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:44:38.255862 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:44:38.261999 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:44:38.262612 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:44:38.262651 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:44:38.263142 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:44:38.263171 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:44:38.263634 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:44:38.263662 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:44:38.264589 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:44:38.264622 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:44:38.265726 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:44:38.274133 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:44:38.274749 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:44:38.275384 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:44:38.275483 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:44:38.276656 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:44:38.276695 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:44:38.277623 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:44:38.277648 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:44:38.278560 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:44:38.278594 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:44:38.280148 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:44:38.280179 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:44:38.281202 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:44:38.281234 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:44:38.290035 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:44:38.291805 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:44:38.291845 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:44:38.292350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:44:38.292381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:38.294301 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:44:38.294382 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:44:38.295618 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:44:38.298245 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:44:38.305982 systemd[1]: Switching root.
Apr 30 03:44:38.354155 systemd-journald[188]: Journal stopped
Apr 30 03:44:39.165915 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:44:39.167753 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:44:39.167767 kernel: SELinux: policy capability open_perms=1
Apr 30 03:44:39.167775 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:44:39.167782 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:44:39.167789 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:44:39.167797 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:44:39.167809 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:44:39.167816 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:44:39.167826 kernel: audit: type=1403 audit(1745984678.534:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:44:39.167839 systemd[1]: Successfully loaded SELinux policy in 47.788ms.
Apr 30 03:44:39.167851 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.027ms.
Apr 30 03:44:39.167860 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:44:39.167868 systemd[1]: Detected virtualization kvm.
Apr 30 03:44:39.167877 systemd[1]: Detected architecture x86-64.
Apr 30 03:44:39.167886 systemd[1]: Detected first boot.
Apr 30 03:44:39.167915 systemd[1]: Hostname set to <ci-4081-3-3-c-b54c1f5c93>.
Apr 30 03:44:39.167923 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:44:39.167932 zram_generator::config[1082]: No configuration found.
Apr 30 03:44:39.167944 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:44:39.167953 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:44:39.167960 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 03:44:39.167969 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:44:39.167977 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:44:39.167986 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:44:39.167994 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:44:39.168002 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:44:39.168010 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:44:39.168018 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:44:39.168028 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:44:39.168036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:44:39.168044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:44:39.168054 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:44:39.168061 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:44:39.168069 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:44:39.168077 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:44:39.168085 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:44:39.168093 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:44:39.168101 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:44:39.168109 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:44:39.168121 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:44:39.168129 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:44:39.168137 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:44:39.168145 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:44:39.168153 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:44:39.168160 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:44:39.168169 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:44:39.168177 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:44:39.168187 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:44:39.168194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:44:39.168203 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:44:39.168211 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:44:39.168222 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:44:39.168231 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:44:39.168240 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:39.168248 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:44:39.168265 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:44:39.168273 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:44:39.168283 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:44:39.168291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:44:39.168299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:44:39.168307 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:44:39.168316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:44:39.168324 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:44:39.168332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:44:39.168340 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:44:39.168347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:44:39.168355 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:44:39.168363 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 30 03:44:39.168372 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 30 03:44:39.168381 kernel: fuse: init (API version 7.39)
Apr 30 03:44:39.168389 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:44:39.168397 kernel: loop: module loaded
Apr 30 03:44:39.168404 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:44:39.168412 kernel: ACPI: bus type drm_connector registered
Apr 30 03:44:39.168420 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:44:39.168427 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:44:39.168436 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:44:39.168458 systemd-journald[1184]: Collecting audit messages is disabled.
Apr 30 03:44:39.168479 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:39.168488 systemd-journald[1184]: Journal started
Apr 30 03:44:39.168505 systemd-journald[1184]: Runtime Journal (/run/log/journal/5be4ce21156d49778ccee502572975be) is 4.8M, max 38.4M, 33.6M free.
Apr 30 03:44:39.173852 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:44:39.173914 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:44:39.178410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:44:39.179096 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:44:39.179627 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:44:39.180197 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:44:39.180792 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:44:39.181472 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:44:39.182212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:44:39.183045 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:44:39.183272 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:44:39.184145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:44:39.184365 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:44:39.185229 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:44:39.185418 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:44:39.186319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:44:39.186480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
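The runtime journal sizing printed above (current 4.8M, capped at 38.4M) comes from journald's defaults, which scale with the size of the filesystem backing /run. The caps can be pinned explicitly via a drop-in; a minimal sketch, with the values chosen arbitrarily for illustration:

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=38M
    SystemMaxUse=512M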
Apr 30 03:44:39.187347 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:44:39.187567 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:44:39.188274 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:44:39.188422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:44:39.189369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:44:39.190214 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:44:39.191268 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:44:39.199448 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:44:39.204956 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:44:39.208093 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:44:39.208688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:44:39.210789 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:44:39.215995 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:44:39.216548 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:44:39.218269 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:44:39.219718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:44:39.227884 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:44:39.241242 systemd-journald[1184]: Time spent on flushing to /var/log/journal/5be4ce21156d49778ccee502572975be is 18.144ms for 1118 entries.
Apr 30 03:44:39.241242 systemd-journald[1184]: System Journal (/var/log/journal/5be4ce21156d49778ccee502572975be) is 8.0M, max 584.8M, 576.8M free.
Apr 30 03:44:39.275981 systemd-journald[1184]: Received client request to flush runtime journal.
Apr 30 03:44:39.232956 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:44:39.240049 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:44:39.240741 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:44:39.244889 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:44:39.250248 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:44:39.253004 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:44:39.255866 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:44:39.263446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:44:39.277083 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:44:39.278278 udevadm[1231]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 03:44:39.285358 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Apr 30 03:44:39.285371 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Apr 30 03:44:39.291019 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:44:39.297051 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:44:39.320208 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:44:39.327015 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:44:39.337245 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Apr 30 03:44:39.337477 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Apr 30 03:44:39.340442 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:44:39.628804 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:44:39.634031 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:44:39.650588 systemd-udevd[1253]: Using default interface naming scheme 'v255'.
Apr 30 03:44:39.670635 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:44:39.681012 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:44:39.689993 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:44:39.712715 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 30 03:44:39.737179 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 03:44:39.784947 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1264)
Apr 30 03:44:39.785325 systemd-networkd[1258]: lo: Link UP
Apr 30 03:44:39.785539 systemd-networkd[1258]: lo: Gained carrier
Apr 30 03:44:39.788216 systemd-networkd[1258]: Enumeration completed
Apr 30 03:44:39.788858 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:44:39.792132 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:39.792478 systemd-networkd[1258]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:44:39.796199 systemd-networkd[1258]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:39.796272 systemd-networkd[1258]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:44:39.797010 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 03:44:39.797412 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:39.797748 systemd-networkd[1258]: eth0: Link UP
Apr 30 03:44:39.797810 systemd-networkd[1258]: eth0: Gained carrier
Apr 30 03:44:39.797984 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:39.803404 systemd-networkd[1258]: eth1: Link UP
Apr 30 03:44:39.803790 systemd-networkd[1258]: eth1: Gained carrier
Apr 30 03:44:39.803868 systemd-networkd[1258]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:39.806910 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 03:44:39.811914 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Apr 30 03:44:39.826918 kernel: ACPI: button: Power Button [PWRF]
Apr 30 03:44:39.835432 systemd-networkd[1258]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:44:39.838975 systemd-networkd[1258]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 03:44:39.846123 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Apr 30 03:44:39.846141 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 30 03:44:39.846181 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:39.846284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:44:39.853559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:44:39.855448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:44:39.856992 systemd-networkd[1258]: eth0: DHCPv4 address 37.27.250.194/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 03:44:39.863259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:44:39.864810 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:44:39.864844 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:44:39.864875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:44:39.865408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:44:39.865520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:44:39.866637 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:44:39.866967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:44:39.873327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 03:44:39.874667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:44:39.874809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:44:39.876835 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:44:39.878119 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
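Note the Hetzner-style leases on both runs of this log: a /32 host address with a gateway that lies outside any local subnet. networkd accepts this directly from DHCP; expressing the same setup statically requires an on-link gateway route, roughly as follows (addresses copied from the eth0 lease above, file name assumed):

    # /etc/systemd/network/10-eth0-static.network
    [Match]
    Name=eth0

    [Address]
    Address=37.27.250.194/32

    [Route]
    Gateway=172.31.1.1
    GatewayOnLink=true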
Apr 30 03:44:39.885949 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Apr 30 03:44:39.890620 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 30 03:44:39.890779 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 30 03:44:39.890880 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 30 03:44:39.902499 kernel: EDAC MC: Ver: 3.0.0
Apr 30 03:44:39.915081 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:44:39.923935 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 30 03:44:39.923967 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 30 03:44:39.928258 kernel: Console: switching to colour dummy device 80x25
Apr 30 03:44:39.929852 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 03:44:39.929885 kernel: [drm] features: -context_init
Apr 30 03:44:39.930941 kernel: [drm] number of scanouts: 1
Apr 30 03:44:39.930965 kernel: [drm] number of cap sets: 0
Apr 30 03:44:39.934182 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 30 03:44:39.934207 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 30 03:44:39.934218 kernel: Console: switching to colour frame buffer device 160x50
Apr 30 03:44:39.931563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:44:39.931743 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:39.944175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:44:39.946918 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 03:44:39.953543 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:44:39.953697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:39.962026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:44:39.999175 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:44:40.079306 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 03:44:40.090095 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 03:44:40.101593 lvm[1324]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:44:40.129726 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 03:44:40.130380 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:44:40.135016 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 03:44:40.138663 lvm[1327]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:44:40.165580 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 03:44:40.166053 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:44:40.166156 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:44:40.166181 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:44:40.166279 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:44:40.167708 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:44:40.175049 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:44:40.176596 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:44:40.176800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:44:40.179941 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:44:40.186425 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:44:40.188008 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:44:40.190659 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:44:40.201922 kernel: loop0: detected capacity change from 0 to 142488 Apr 30 03:44:40.209105 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:44:40.209512 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:44:40.212960 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:44:40.242384 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:44:40.263260 kernel: loop1: detected capacity change from 0 to 210664 Apr 30 03:44:40.302921 kernel: loop2: detected capacity change from 0 to 140768 Apr 30 03:44:40.345364 kernel: loop3: detected capacity change from 0 to 8 Apr 30 03:44:40.363929 kernel: loop4: detected capacity change from 0 to 142488 Apr 30 03:44:40.378942 kernel: loop5: detected capacity change from 0 to 210664 Apr 30 03:44:40.396914 kernel: loop6: detected capacity change from 0 to 140768 Apr 30 03:44:40.410936 kernel: loop7: detected capacity change from 0 to 8 Apr 30 03:44:40.411971 (sd-merge)[1348]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 30 03:44:40.412300 (sd-merge)[1348]: Merged extensions into '/usr'. Apr 30 03:44:40.415774 systemd[1]: Reloading requested from client PID 1335 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:44:40.415789 systemd[1]: Reloading... Apr 30 03:44:40.460919 zram_generator::config[1372]: No configuration found. Apr 30 03:44:40.550077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:44:40.553036 ldconfig[1331]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:44:40.597358 systemd[1]: Reloading finished in 180 ms. Apr 30 03:44:40.610958 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:44:40.614482 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:44:40.631220 systemd[1]: Starting ensure-sysext.service... Apr 30 03:44:40.635983 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:44:40.643298 systemd[1]: Reloading requested from client PID 1426 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:44:40.643375 systemd[1]: Reloading... 
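[Editor's note] The (sd-merge) lines show systemd-sysext overlaying the four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) onto /usr, which is why a manager reload follows: unit files gained through the merge must be re-read. The merge state can be inspected and redone with the stock tool:

    # List merged extension images and the hierarchies they cover:
    systemd-sysext status
    # Unmerge and re-merge after adding or removing an image
    # (images live under e.g. /var/lib/extensions):
    systemd-sysext refresh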
Apr 30 03:44:40.648015 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:44:40.648253 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:44:40.648805 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:44:40.649021 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Apr 30 03:44:40.649072 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Apr 30 03:44:40.651169 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:44:40.651180 systemd-tmpfiles[1427]: Skipping /boot Apr 30 03:44:40.656464 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:44:40.656474 systemd-tmpfiles[1427]: Skipping /boot Apr 30 03:44:40.686930 zram_generator::config[1457]: No configuration found. Apr 30 03:44:40.764845 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:44:40.812201 systemd[1]: Reloading finished in 168 ms. Apr 30 03:44:40.827130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:44:40.851691 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:44:40.852985 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:44:40.875083 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:44:40.875671 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:44:40.879115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:44:40.885929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:44:40.893371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:44:40.893803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:44:40.897135 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:44:40.902990 systemd-networkd[1258]: eth1: Gained IPv6LL Apr 30 03:44:40.904144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:44:40.913953 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:44:40.914327 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:44:40.915355 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:44:40.916817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:44:40.916997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:44:40.920567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:44:40.920682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:44:40.921287 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 30 03:44:40.921436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:44:40.933057 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:44:40.934671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:44:40.934834 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:44:40.937069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:44:40.944159 augenrules[1542]: No rules Apr 30 03:44:40.947410 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:44:40.953447 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:44:40.955416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:44:40.955502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:44:40.957340 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:44:40.961006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:44:40.961119 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:44:40.964167 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:44:40.964289 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:44:40.972194 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:44:40.974882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:44:40.978036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:44:40.982559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:44:40.993136 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:44:40.997793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:44:40.998370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:44:41.007082 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:44:41.007434 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:44:41.008198 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:44:41.008326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:44:41.010889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:44:41.011089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:44:41.013702 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:44:41.013815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:44:41.014449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:44:41.014554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
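[Editor's note] The repeated modprobe@dm_mod/efi_pstore/loop start-finish pairs above all come from a single templated unit instantiated once per module name; ensure-sysext re-triggers them whenever the unit set changes. The template and a rendered instance can be compared directly:

    # Show the shared template, then one concrete instance of it:
    systemctl cat modprobe@.service
    systemctl status modprobe@dm_mod.service --no-pager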
Apr 30 03:44:41.019821 systemd[1]: Finished ensure-sysext.service. Apr 30 03:44:41.023830 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:44:41.028635 systemd-resolved[1522]: Positive Trust Anchors: Apr 30 03:44:41.029155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:44:41.029192 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:44:41.029608 systemd-resolved[1522]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:44:41.029634 systemd-resolved[1522]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:44:41.032915 systemd-resolved[1522]: Using system hostname 'ci-4081-3-3-c-b54c1f5c93'. Apr 30 03:44:41.035010 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 03:44:41.038453 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:44:41.038913 systemd[1]: Reached target network.target - Network. Apr 30 03:44:41.039300 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:44:41.039597 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:44:41.048090 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:44:41.048797 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:44:41.077957 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:44:41.078435 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:44:41.078828 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:44:41.079199 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:44:41.079558 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:44:41.080629 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:44:41.080659 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:44:41.082247 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:44:41.082684 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:44:41.083084 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:44:41.083399 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:44:41.088631 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
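[Editor's note] systemd-resolved loaded the root DNSSEC trust anchor (the well-known DS 20326 record) plus a long negative list of private and reverse zones that must never be validated upstream. Its runtime view, including per-link DNS servers and the DNSSEC setting, is available via resolvectl:

    # Global and per-link resolver state as resolved sees it:
    resolvectl status
    # Run a lookup through the stub; validation metadata is shown when available:
    resolvectl query flatcar.org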
Apr 30 03:44:41.091753 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:44:41.096211 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:44:41.096936 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:44:41.097271 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:44:41.097561 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:44:41.098955 systemd[1]: System is tainted: cgroupsv1 Apr 30 03:44:41.099047 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:44:41.099122 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:44:41.099988 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:44:41.104021 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:44:41.108038 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:44:41.116960 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:44:41.120652 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:44:41.121062 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:44:41.128972 jq[1586]: false Apr 30 03:44:41.125990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:44:41.132723 coreos-metadata[1583]: Apr 30 03:44:41.132 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 03:44:41.136952 coreos-metadata[1583]: Apr 30 03:44:41.133 INFO Fetch successful Apr 30 03:44:41.136952 coreos-metadata[1583]: Apr 30 03:44:41.133 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 03:44:41.136952 coreos-metadata[1583]: Apr 30 03:44:41.133 INFO Fetch successful Apr 30 03:44:41.134031 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:44:41.139999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:44:41.143163 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:44:41.148674 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 03:44:41.150737 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:44:41.160976 extend-filesystems[1589]: Found loop4 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found loop5 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found loop6 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found loop7 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found sda Apr 30 03:44:41.160976 extend-filesystems[1589]: Found sda1 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found sda2 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found sda3 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found usr Apr 30 03:44:41.160976 extend-filesystems[1589]: Found sda4 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found sda6 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found sda7 Apr 30 03:44:41.160976 extend-filesystems[1589]: Found sda9 Apr 30 03:44:41.160976 extend-filesystems[1589]: Checking size of /dev/sda9 Apr 30 03:44:41.158206 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
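[Editor's note] coreos-metadata fetched its data from the link-local Hetzner endpoint shown in the log; the same endpoint can be queried by hand when debugging the agent (URLs copied verbatim from the log above):

    # Fetch the instance metadata the agent just consumed:
    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks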
Apr 30 03:44:41.169420 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:44:41.169979 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:44:41.182338 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:44:41.194504 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:44:41.205244 dbus-daemon[1585]: [system] SELinux support is enabled Apr 30 03:44:41.212043 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:44:41.216246 extend-filesystems[1589]: Resized partition /dev/sda9 Apr 30 03:44:41.242251 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1260) Apr 30 03:44:41.242327 extend-filesystems[1630]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:44:41.255505 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 03:44:41.224387 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:44:41.255576 update_engine[1613]: I20250430 03:44:41.233980 1613 main.cc:92] Flatcar Update Engine starting Apr 30 03:44:41.255760 jq[1624]: true Apr 30 03:44:41.224555 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:44:41.231166 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:44:41.263248 update_engine[1613]: I20250430 03:44:41.262351 1613 update_check_scheduler.cc:74] Next update check in 4m10s Apr 30 03:44:41.231368 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:44:41.235087 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:44:41.235999 systemd-timesyncd[1576]: Contacted time server 185.252.140.125:123 (0.flatcar.pool.ntp.org). Apr 30 03:44:41.236033 systemd-timesyncd[1576]: Initial clock synchronization to Wed 2025-04-30 03:44:41.397492 UTC. Apr 30 03:44:41.242614 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:44:41.242778 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:44:41.269206 (ntainerd)[1635]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:44:41.273643 systemd-logind[1611]: New seat seat0. Apr 30 03:44:41.285787 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:44:41.286586 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:44:41.286609 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:44:41.289987 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:44:41.290004 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
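[Editor's note] The EXT4-fs lines record an online grow from 1617920 to 9393147 4k blocks: extend-filesystems enlarges the root partition, then resize2fs grows the mounted filesystem in place. A hedged manual equivalent (growpart from cloud-utils is an assumption about tooling; device names are from the log):

    # Grow partition 9 of /dev/sda into free space, then the live ext4 on it:
    growpart /dev/sda 9
    resize2fs /dev/sda9    # safe online for a mounted ext4 that is growing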
Apr 30 03:44:41.291683 systemd-logind[1611]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 03:44:41.291696 systemd-logind[1611]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:44:41.298981 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:44:41.303998 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:44:41.304440 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:44:41.318674 jq[1642]: true Apr 30 03:44:41.340108 tar[1634]: linux-amd64/helm Apr 30 03:44:41.379234 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:44:41.379883 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:44:41.417592 locksmithd[1653]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:44:41.432722 bash[1678]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:44:41.437290 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:44:41.449903 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 03:44:41.451767 systemd[1]: Starting sshkeys.service... Apr 30 03:44:41.466852 extend-filesystems[1630]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 03:44:41.466852 extend-filesystems[1630]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 03:44:41.466852 extend-filesystems[1630]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 03:44:41.470435 extend-filesystems[1589]: Resized filesystem in /dev/sda9 Apr 30 03:44:41.480084 extend-filesystems[1589]: Found sr0 Apr 30 03:44:41.474119 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:44:41.474314 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:44:41.489298 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:44:41.497112 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:44:41.505943 sshd_keygen[1625]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:44:41.537038 containerd[1635]: time="2025-04-30T03:44:41.536979727Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:44:41.540666 coreos-metadata[1695]: Apr 30 03:44:41.539 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 03:44:41.541561 coreos-metadata[1695]: Apr 30 03:44:41.541 INFO Fetch successful Apr 30 03:44:41.543990 unknown[1695]: wrote ssh authorized keys file for user: core Apr 30 03:44:41.557073 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:44:41.567089 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:44:41.576553 update-ssh-keys[1708]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:44:41.579268 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:44:41.582796 systemd[1]: Finished sshkeys.service. Apr 30 03:44:41.591023 containerd[1635]: time="2025-04-30T03:44:41.590714286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:44:41.594114 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:44:41.594293 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:44:41.598154 containerd[1635]: time="2025-04-30T03:44:41.598121078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:41.598193 containerd[1635]: time="2025-04-30T03:44:41.598154471Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:44:41.598193 containerd[1635]: time="2025-04-30T03:44:41.598169999Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:44:41.598319 containerd[1635]: time="2025-04-30T03:44:41.598302428Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:44:41.598362 containerd[1635]: time="2025-04-30T03:44:41.598322265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.598399891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.598415049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.598572314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.598586821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.598597592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.598605576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.598666030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.598880412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.599175897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.599189952Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 30 03:44:41.599696 containerd[1635]: time="2025-04-30T03:44:41.599294038Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:44:41.600303 containerd[1635]: time="2025-04-30T03:44:41.599354962Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:44:41.602093 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:44:41.607746 containerd[1635]: time="2025-04-30T03:44:41.607723026Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:44:41.607821 containerd[1635]: time="2025-04-30T03:44:41.607799230Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:44:41.607845 containerd[1635]: time="2025-04-30T03:44:41.607825829Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:44:41.607881 containerd[1635]: time="2025-04-30T03:44:41.607867608Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:44:41.607913 containerd[1635]: time="2025-04-30T03:44:41.607886834Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:44:41.608007 containerd[1635]: time="2025-04-30T03:44:41.607990188Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:44:41.608304 containerd[1635]: time="2025-04-30T03:44:41.608286113Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:44:41.608385 containerd[1635]: time="2025-04-30T03:44:41.608368998Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:44:41.608404 containerd[1635]: time="2025-04-30T03:44:41.608388404Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:44:41.608404 containerd[1635]: time="2025-04-30T03:44:41.608399205Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:44:41.608429 containerd[1635]: time="2025-04-30T03:44:41.608408933Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:44:41.608429 containerd[1635]: time="2025-04-30T03:44:41.608425605Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:44:41.608459 containerd[1635]: time="2025-04-30T03:44:41.608434862Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:44:41.608459 containerd[1635]: time="2025-04-30T03:44:41.608444419Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:44:41.608459 containerd[1635]: time="2025-04-30T03:44:41.608455961Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:44:41.608497 containerd[1635]: time="2025-04-30T03:44:41.608465699Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Apr 30 03:44:41.608497 containerd[1635]: time="2025-04-30T03:44:41.608474917Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:44:41.608497 containerd[1635]: time="2025-04-30T03:44:41.608486118Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:44:41.608536 containerd[1635]: time="2025-04-30T03:44:41.608501557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608536 containerd[1635]: time="2025-04-30T03:44:41.608513018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608536 containerd[1635]: time="2025-04-30T03:44:41.608521945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608536 containerd[1635]: time="2025-04-30T03:44:41.608531603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608592 containerd[1635]: time="2025-04-30T03:44:41.608540560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608592 containerd[1635]: time="2025-04-30T03:44:41.608549747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608592 containerd[1635]: time="2025-04-30T03:44:41.608558794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608592 containerd[1635]: time="2025-04-30T03:44:41.608568032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608592 containerd[1635]: time="2025-04-30T03:44:41.608577318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608592 containerd[1635]: time="2025-04-30T03:44:41.608588099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608678 containerd[1635]: time="2025-04-30T03:44:41.608597086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608678 containerd[1635]: time="2025-04-30T03:44:41.608605983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608678 containerd[1635]: time="2025-04-30T03:44:41.608614909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608678 containerd[1635]: time="2025-04-30T03:44:41.608626541Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:44:41.608678 containerd[1635]: time="2025-04-30T03:44:41.608641839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608678 containerd[1635]: time="2025-04-30T03:44:41.608650356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608678 containerd[1635]: time="2025-04-30T03:44:41.608657780Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Apr 30 03:44:41.608766 containerd[1635]: time="2025-04-30T03:44:41.608689519Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:44:41.608766 containerd[1635]: time="2025-04-30T03:44:41.608701822Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:44:41.608766 containerd[1635]: time="2025-04-30T03:44:41.608710138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:44:41.608766 containerd[1635]: time="2025-04-30T03:44:41.608719415Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:44:41.608766 containerd[1635]: time="2025-04-30T03:44:41.608726258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.608766 containerd[1635]: time="2025-04-30T03:44:41.608737298Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:44:41.608766 containerd[1635]: time="2025-04-30T03:44:41.608745053Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:44:41.608766 containerd[1635]: time="2025-04-30T03:44:41.608752086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:44:41.609022 containerd[1635]: time="2025-04-30T03:44:41.608972660Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:44:41.609115 containerd[1635]: time="2025-04-30T03:44:41.609025610Z" level=info msg="Connect containerd service" Apr 30 03:44:41.609115 containerd[1635]: time="2025-04-30T03:44:41.609055355Z" level=info msg="using legacy CRI server" Apr 30 03:44:41.609115 containerd[1635]: time="2025-04-30T03:44:41.609060505Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:44:41.609164 containerd[1635]: time="2025-04-30T03:44:41.609127350Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:44:41.612005 containerd[1635]: time="2025-04-30T03:44:41.611983157Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:44:41.612140 containerd[1635]: time="2025-04-30T03:44:41.612113551Z" level=info msg="Start subscribing containerd event" Apr 30 03:44:41.612168 containerd[1635]: time="2025-04-30T03:44:41.612152805Z" level=info msg="Start recovering state" Apr 30 03:44:41.612235 containerd[1635]: time="2025-04-30T03:44:41.612198590Z" level=info msg="Start event monitor" Apr 30 03:44:41.612235 containerd[1635]: time="2025-04-30T03:44:41.612229648Z" level=info msg="Start snapshots syncer" Apr 30 03:44:41.612277 containerd[1635]: time="2025-04-30T03:44:41.612237604Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:44:41.612277 containerd[1635]: time="2025-04-30T03:44:41.612243214Z" level=info msg="Start streaming server" Apr 30 03:44:41.613306 containerd[1635]: time="2025-04-30T03:44:41.613166165Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:44:41.613976 containerd[1635]: time="2025-04-30T03:44:41.613959343Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:44:41.616360 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:44:41.622464 containerd[1635]: time="2025-04-30T03:44:41.616069501Z" level=info msg="containerd successfully booted in 0.080415s" Apr 30 03:44:41.624176 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:44:41.634838 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:44:41.645168 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:44:41.645652 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:44:41.794106 systemd-networkd[1258]: eth0: Gained IPv6LL Apr 30 03:44:41.851505 tar[1634]: linux-amd64/LICENSE Apr 30 03:44:41.851505 tar[1634]: linux-amd64/README.md Apr 30 03:44:41.859985 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:44:42.206080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
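[Editor's note] The long CRI config dump above shows this containerd running with the overlayfs snapshotter, runc via io.containerd.runc.v2, and SystemdCgroup:false (consistent with the cgroupsv1 taint logged earlier). The CNI load failure is expected this early, before anything has written a network config into /etc/cni/net.d. The effective settings come from containerd's TOML config and can be compared against the shipped defaults:

    # Print containerd's built-in defaults to diff against the running config:
    containerd config default | grep -n 'SystemdCgroup\|snapshotter'
    # Any host override conventionally lives at /etc/containerd/config.toml.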
Apr 30 03:44:42.207153 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:44:42.209621 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:44:42.210014 systemd[1]: Startup finished in 8.011s (kernel) + 3.722s (userspace) = 11.734s. Apr 30 03:44:42.770443 kubelet[1744]: E0430 03:44:42.770367 1744 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:44:42.772709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:44:42.772866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:44:52.818865 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:44:52.824315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:44:52.902005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:44:52.916175 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:44:52.957577 kubelet[1769]: E0430 03:44:52.957532 1769 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:44:52.960884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:44:52.961059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:03.068796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:45:03.074055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:03.161275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:03.164565 (kubelet)[1791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:03.197468 kubelet[1791]: E0430 03:45:03.197402 1791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:03.199210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:03.199394 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:13.318776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 03:45:13.324042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:13.403865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
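[Editor's note] The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; on a stock node that file is written by kubeadm during init/join, so this loop (systemd re-launching it on a roughly 10-second cadence per its Restart= settings) is the expected pre-bootstrap state rather than a fault. A hypothetical stand-in, only to illustrate the expected location and minimal shape of the file:

    # Sketch: the file kubeadm would normally generate. These contents are a
    # bare-minimum assumption, not this cluster's real configuration.
    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    EOF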
Apr 30 03:45:13.406657 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:13.442197 kubelet[1812]: E0430 03:45:13.442128 1812 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:13.444460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:13.444648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:23.568804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 03:45:23.574084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:23.654390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:23.657365 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:23.691296 kubelet[1833]: E0430 03:45:23.691246 1833 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:23.693184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:23.693349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:26.509632 update_engine[1613]: I20250430 03:45:26.509509 1613 update_attempter.cc:509] Updating boot flags... Apr 30 03:45:26.575975 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1851) Apr 30 03:45:26.614316 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1852) Apr 30 03:45:26.651945 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1852) Apr 30 03:45:33.818570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 03:45:33.828045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:33.904212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:33.907307 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:33.941338 kubelet[1875]: E0430 03:45:33.941294 1875 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:33.944448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:33.944614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:44.068628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 30 03:45:44.074178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
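[Editor's note] In between kubelet retries, update_engine updated the boot flags, which typically means marking the freshly booted partition set healthy so the bootloader will not roll back; the accompanying BTRFS duplicate-device warnings are udev rescanning /dev/sda3 and are benign. On Flatcar the updater's state can be queried directly:

    # Query the updater's current operation and progress:
    update_engine_client -status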
Apr 30 03:45:44.158005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:44.158171 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:44.189242 kubelet[1896]: E0430 03:45:44.189163 1896 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:44.190889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:44.191074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:45:54.318756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 30 03:45:54.324235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:45:54.405021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:45:54.407489 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:45:54.438506 kubelet[1918]: E0430 03:45:54.438425 1918 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:45:54.440407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:45:54.440544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:04.568610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 03:46:04.575040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:04.658448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:04.661201 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:04.692344 kubelet[1939]: E0430 03:46:04.692304 1939 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:04.693987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:04.694126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:14.818676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 30 03:46:14.824028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:14.907215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:46:14.910148 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:14.940728 kubelet[1960]: E0430 03:46:14.940688 1960 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:14.942331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:14.942466 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:25.068634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 30 03:46:25.074043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:25.149384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:25.152055 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:25.183738 kubelet[1981]: E0430 03:46:25.183696 1981 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:25.185586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:25.185725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:29.714122 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:46:29.719276 systemd[1]: Started sshd@0-37.27.250.194:22-139.178.68.195:35060.service - OpenSSH per-connection server daemon (139.178.68.195:35060). Apr 30 03:46:30.714942 sshd[1990]: Accepted publickey for core from 139.178.68.195 port 35060 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:30.716493 sshd[1990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:30.723435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:46:30.728080 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:46:30.730211 systemd-logind[1611]: New session 1 of user core. Apr 30 03:46:30.739454 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:46:30.745127 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:46:30.747846 (systemd)[1996]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:46:30.823275 systemd[1996]: Queued start job for default target default.target. Apr 30 03:46:30.823534 systemd[1996]: Created slice app.slice - User Application Slice. Apr 30 03:46:30.823551 systemd[1996]: Reached target paths.target - Paths. Apr 30 03:46:30.823561 systemd[1996]: Reached target timers.target - Timers. Apr 30 03:46:30.833949 systemd[1996]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:46:30.839605 systemd[1996]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:46:30.839650 systemd[1996]: Reached target sockets.target - Sockets. 
Apr 30 03:46:30.839661 systemd[1996]: Reached target basic.target - Basic System. Apr 30 03:46:30.839693 systemd[1996]: Reached target default.target - Main User Target. Apr 30 03:46:30.839713 systemd[1996]: Startup finished in 87ms. Apr 30 03:46:30.839824 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:46:30.843067 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:46:31.558098 systemd[1]: Started sshd@1-37.27.250.194:22-139.178.68.195:35076.service - OpenSSH per-connection server daemon (139.178.68.195:35076). Apr 30 03:46:32.521019 sshd[2008]: Accepted publickey for core from 139.178.68.195 port 35076 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:32.522188 sshd[2008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:32.526396 systemd-logind[1611]: New session 2 of user core. Apr 30 03:46:32.532270 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:46:33.194066 sshd[2008]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:33.196389 systemd[1]: sshd@1-37.27.250.194:22-139.178.68.195:35076.service: Deactivated successfully. Apr 30 03:46:33.199396 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:46:33.199425 systemd-logind[1611]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:46:33.201285 systemd-logind[1611]: Removed session 2. Apr 30 03:46:33.359213 systemd[1]: Started sshd@2-37.27.250.194:22-139.178.68.195:35080.service - OpenSSH per-connection server daemon (139.178.68.195:35080). Apr 30 03:46:34.332255 sshd[2016]: Accepted publickey for core from 139.178.68.195 port 35080 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:34.333432 sshd[2016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:34.337713 systemd-logind[1611]: New session 3 of user core. Apr 30 03:46:34.348118 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:46:35.004795 sshd[2016]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:35.008235 systemd[1]: sshd@2-37.27.250.194:22-139.178.68.195:35080.service: Deactivated successfully. Apr 30 03:46:35.010886 systemd-logind[1611]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:46:35.011385 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:46:35.012345 systemd-logind[1611]: Removed session 3. Apr 30 03:46:35.170144 systemd[1]: Started sshd@3-37.27.250.194:22-139.178.68.195:35096.service - OpenSSH per-connection server daemon (139.178.68.195:35096). Apr 30 03:46:35.318712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 03:46:35.324291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:35.428589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:46:35.432376 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:35.465176 kubelet[2038]: E0430 03:46:35.465139 2038 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:35.467429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:35.467584 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:36.135772 sshd[2024]: Accepted publickey for core from 139.178.68.195 port 35096 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:36.137036 sshd[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:36.141576 systemd-logind[1611]: New session 4 of user core. Apr 30 03:46:36.148184 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:46:36.811341 sshd[2024]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:36.815347 systemd[1]: sshd@3-37.27.250.194:22-139.178.68.195:35096.service: Deactivated successfully. Apr 30 03:46:36.816564 systemd-logind[1611]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:46:36.818112 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:46:36.818975 systemd-logind[1611]: Removed session 4. Apr 30 03:46:36.975373 systemd[1]: Started sshd@4-37.27.250.194:22-139.178.68.195:34878.service - OpenSSH per-connection server daemon (139.178.68.195:34878). Apr 30 03:46:37.939772 sshd[2054]: Accepted publickey for core from 139.178.68.195 port 34878 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:37.941004 sshd[2054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:37.945200 systemd-logind[1611]: New session 5 of user core. Apr 30 03:46:37.955124 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:46:38.466278 sudo[2058]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:46:38.466689 sudo[2058]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:46:38.484411 sudo[2058]: pam_unix(sudo:session): session closed for user root Apr 30 03:46:38.641949 sshd[2054]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:38.644619 systemd[1]: sshd@4-37.27.250.194:22-139.178.68.195:34878.service: Deactivated successfully. Apr 30 03:46:38.647116 systemd-logind[1611]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:46:38.648374 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:46:38.649224 systemd-logind[1611]: Removed session 5. Apr 30 03:46:38.804071 systemd[1]: Started sshd@5-37.27.250.194:22-139.178.68.195:34894.service - OpenSSH per-connection server daemon (139.178.68.195:34894). Apr 30 03:46:39.770612 sshd[2063]: Accepted publickey for core from 139.178.68.195 port 34894 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:39.771942 sshd[2063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:39.776036 systemd-logind[1611]: New session 6 of user core. Apr 30 03:46:39.782128 systemd[1]: Started session-6.scope - Session 6 of User core. 
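
The sudo entries in this stretch of the log (setenforce above, the audit-rules cleanup that follows) are all the core user escalating without a password, which on Flatcar comes from a blanket sudoers grant along these lines (stated as an assumption, not read from this host):

    # /etc/sudoers.d/core -- sketch of the grant behind the
    # "core : ... USER=root ; COMMAND=..." entries in this log.
    core ALL=(ALL) NOPASSWD: ALL
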
Apr 30 03:46:40.290140 sudo[2068]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:46:40.290425 sudo[2068]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:46:40.294141 sudo[2068]: pam_unix(sudo:session): session closed for user root Apr 30 03:46:40.299867 sudo[2067]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:46:40.300270 sudo[2067]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:46:40.320458 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:46:40.321651 auditctl[2071]: No rules Apr 30 03:46:40.322067 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:46:40.322268 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:46:40.327350 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:46:40.349928 augenrules[2090]: No rules Apr 30 03:46:40.351091 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:46:40.353069 sudo[2067]: pam_unix(sudo:session): session closed for user root Apr 30 03:46:40.511474 sshd[2063]: pam_unix(sshd:session): session closed for user core Apr 30 03:46:40.515446 systemd[1]: sshd@5-37.27.250.194:22-139.178.68.195:34894.service: Deactivated successfully. Apr 30 03:46:40.517089 systemd-logind[1611]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:46:40.518445 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:46:40.519508 systemd-logind[1611]: Removed session 6. Apr 30 03:46:40.674298 systemd[1]: Started sshd@6-37.27.250.194:22-139.178.68.195:34910.service - OpenSSH per-connection server daemon (139.178.68.195:34910). Apr 30 03:46:41.649669 sshd[2099]: Accepted publickey for core from 139.178.68.195 port 34910 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:46:41.650881 sshd[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:46:41.655045 systemd-logind[1611]: New session 7 of user core. Apr 30 03:46:41.661101 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:46:42.167557 sudo[2103]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:46:42.167812 sudo[2103]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:46:42.399042 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:46:42.399867 (dockerd)[2119]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:46:42.609543 dockerd[2119]: time="2025-04-30T03:46:42.609495020Z" level=info msg="Starting up" Apr 30 03:46:42.661167 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3371765823-merged.mount: Deactivated successfully. Apr 30 03:46:42.685355 dockerd[2119]: time="2025-04-30T03:46:42.685217727Z" level=info msg="Loading containers: start." Apr 30 03:46:42.766940 kernel: Initializing XFRM netlink socket Apr 30 03:46:42.830198 systemd-networkd[1258]: docker0: Link UP Apr 30 03:46:42.842811 dockerd[2119]: time="2025-04-30T03:46:42.842771827Z" level=info msg="Loading containers: done." 
Apr 30 03:46:42.855279 dockerd[2119]: time="2025-04-30T03:46:42.855245837Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:46:42.855361 dockerd[2119]: time="2025-04-30T03:46:42.855324867Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:46:42.855435 dockerd[2119]: time="2025-04-30T03:46:42.855410600Z" level=info msg="Daemon has completed initialization" Apr 30 03:46:42.879952 dockerd[2119]: time="2025-04-30T03:46:42.879845936Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:46:42.880312 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:46:44.030359 containerd[1635]: time="2025-04-30T03:46:44.030279480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:46:44.574231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376509092.mount: Deactivated successfully. Apr 30 03:46:45.568519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 30 03:46:45.575071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:45.656090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:45.666163 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:45.698133 kubelet[2329]: E0430 03:46:45.698059 2329 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:45.699962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:45.700091 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
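
The failed kubelet attempts land almost exactly ten seconds apart (03:46:14, :25, :35, :45) while the counter climbs through 10, 11 and 12 -- the signature of a unit restarted unconditionally with a 10-second delay. A sketch of the directives implied (the unit actually shipped on this image may phrase them differently):

    # kubelet.service restart policy implied by the cadence above -- sketch,
    # not the verbatim unit file.
    [Service]
    Restart=always
    RestartSec=10
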
Apr 30 03:46:45.972656 containerd[1635]: time="2025-04-30T03:46:45.972559397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:45.973692 containerd[1635]: time="2025-04-30T03:46:45.973489597Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674967" Apr 30 03:46:45.974467 containerd[1635]: time="2025-04-30T03:46:45.974430248Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:45.976451 containerd[1635]: time="2025-04-30T03:46:45.976420634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:45.977455 containerd[1635]: time="2025-04-30T03:46:45.977265872Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.946943661s" Apr 30 03:46:45.977455 containerd[1635]: time="2025-04-30T03:46:45.977293044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:46:45.991570 containerd[1635]: time="2025-04-30T03:46:45.991539853Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:46:47.740610 containerd[1635]: time="2025-04-30T03:46:47.740549156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:47.741608 containerd[1635]: time="2025-04-30T03:46:47.741396278Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617556" Apr 30 03:46:47.742399 containerd[1635]: time="2025-04-30T03:46:47.742364661Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:47.744297 containerd[1635]: time="2025-04-30T03:46:47.744253254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:47.745266 containerd[1635]: time="2025-04-30T03:46:47.745096969Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.753531438s" Apr 30 03:46:47.745266 containerd[1635]: time="2025-04-30T03:46:47.745121706Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 
03:46:47.761099 containerd[1635]: time="2025-04-30T03:46:47.761068946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 03:46:48.996382 containerd[1635]: time="2025-04-30T03:46:48.996334265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:48.997272 containerd[1635]: time="2025-04-30T03:46:48.997228756Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903704" Apr 30 03:46:48.998206 containerd[1635]: time="2025-04-30T03:46:48.998173404Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:49.003065 containerd[1635]: time="2025-04-30T03:46:49.002907259Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.241793567s" Apr 30 03:46:49.003065 containerd[1635]: time="2025-04-30T03:46:49.002933699Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 03:46:49.003194 containerd[1635]: time="2025-04-30T03:46:49.003177132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:49.018100 containerd[1635]: time="2025-04-30T03:46:49.018081854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 03:46:49.993704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989141288.mount: Deactivated successfully. 
Apr 30 03:46:50.235312 containerd[1635]: time="2025-04-30T03:46:50.235238318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:50.236116 containerd[1635]: time="2025-04-30T03:46:50.236030064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185845" Apr 30 03:46:50.236788 containerd[1635]: time="2025-04-30T03:46:50.236750115Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:50.238219 containerd[1635]: time="2025-04-30T03:46:50.238186046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:50.238606 containerd[1635]: time="2025-04-30T03:46:50.238581127Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.22038693s" Apr 30 03:46:50.238649 containerd[1635]: time="2025-04-30T03:46:50.238608850Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:46:50.253130 containerd[1635]: time="2025-04-30T03:46:50.252966981Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:46:50.743685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795039148.mount: Deactivated successfully. 
Apr 30 03:46:51.481729 containerd[1635]: time="2025-04-30T03:46:51.481684609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:51.482482 containerd[1635]: time="2025-04-30T03:46:51.482440466Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843" Apr 30 03:46:51.483387 containerd[1635]: time="2025-04-30T03:46:51.483350637Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:51.485376 containerd[1635]: time="2025-04-30T03:46:51.485357595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:51.486348 containerd[1635]: time="2025-04-30T03:46:51.486217490Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.23322934s" Apr 30 03:46:51.486348 containerd[1635]: time="2025-04-30T03:46:51.486242809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:46:51.502504 containerd[1635]: time="2025-04-30T03:46:51.502477116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:46:51.932120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263595124.mount: Deactivated successfully. 
Apr 30 03:46:51.938234 containerd[1635]: time="2025-04-30T03:46:51.938146799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:51.939397 containerd[1635]: time="2025-04-30T03:46:51.939323777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322312" Apr 30 03:46:51.940324 containerd[1635]: time="2025-04-30T03:46:51.940214041Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:51.943709 containerd[1635]: time="2025-04-30T03:46:51.943619978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:51.945560 containerd[1635]: time="2025-04-30T03:46:51.944887860Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 442.373534ms" Apr 30 03:46:51.945560 containerd[1635]: time="2025-04-30T03:46:51.944971660Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:46:51.967181 containerd[1635]: time="2025-04-30T03:46:51.967090139Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:46:52.456433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832329084.mount: Deactivated successfully. Apr 30 03:46:53.690553 containerd[1635]: time="2025-04-30T03:46:53.690499018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:53.691497 containerd[1635]: time="2025-04-30T03:46:53.691452140Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238653" Apr 30 03:46:53.692315 containerd[1635]: time="2025-04-30T03:46:53.692258113Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:53.694443 containerd[1635]: time="2025-04-30T03:46:53.694409404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:46:53.695656 containerd[1635]: time="2025-04-30T03:46:53.695614636Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.728485272s" Apr 30 03:46:53.695698 containerd[1635]: time="2025-04-30T03:46:53.695656146Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:46:55.818642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Apr 30 03:46:55.830102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:55.935190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:55.936541 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:46:55.977747 kubelet[2550]: E0430 03:46:55.977628 2550 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:46:55.979137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:46:55.979262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:46:56.192026 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:56.202175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:56.215834 systemd[1]: Reloading requested from client PID 2567 ('systemctl') (unit session-7.scope)... Apr 30 03:46:56.215852 systemd[1]: Reloading... Apr 30 03:46:56.287176 zram_generator::config[2608]: No configuration found. Apr 30 03:46:56.364796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:46:56.424444 systemd[1]: Reloading finished in 208 ms. Apr 30 03:46:56.460527 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:56.462454 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:46:56.462641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:56.471591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:46:56.540464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:46:56.543175 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:46:56.574772 kubelet[2676]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:46:56.574772 kubelet[2676]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:46:56.574772 kubelet[2676]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
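
The docker.socket complaint emitted during the reload above is systemd asking for the legacy /var/run path to be dropped from the unit; the requested fix is a one-line change, shown here as a hypothetical drop-in rather than an edit to the vendor file:

    # /etc/systemd/system/docker.socket.d/override.conf -- sketch applying the
    # update the warning requests.
    [Socket]
    ListenStream=                  # clear the inherited /var/run/docker.sock entry
    ListenStream=/run/docker.sock
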
Apr 30 03:46:56.575813 kubelet[2676]: I0430 03:46:56.575767 2676 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:46:56.766444 kubelet[2676]: I0430 03:46:56.766347 2676 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:46:56.766444 kubelet[2676]: I0430 03:46:56.766371 2676 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:46:56.766912 kubelet[2676]: I0430 03:46:56.766868 2676 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:46:56.783687 kubelet[2676]: E0430 03:46:56.783370 2676 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://37.27.250.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:56.783687 kubelet[2676]: I0430 03:46:56.783651 2676 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:46:56.804750 kubelet[2676]: I0430 03:46:56.804709 2676 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:46:56.806146 kubelet[2676]: I0430 03:46:56.806106 2676 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:46:56.806300 kubelet[2676]: I0430 03:46:56.806140 2676 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-c-b54c1f5c93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:46:56.806381 kubelet[2676]: I0430 03:46:56.806305 2676 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:46:56.806381 kubelet[2676]: I0430 03:46:56.806331 2676 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:46:56.806459 kubelet[2676]: I0430 03:46:56.806435 2676 state_mem.go:36] "Initialized new 
in-memory state store" Apr 30 03:46:56.807267 kubelet[2676]: I0430 03:46:56.807140 2676 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:46:56.807267 kubelet[2676]: I0430 03:46:56.807155 2676 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:46:56.807267 kubelet[2676]: I0430 03:46:56.807171 2676 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:46:56.807267 kubelet[2676]: I0430 03:46:56.807184 2676 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:46:56.808728 kubelet[2676]: W0430 03:46:56.808251 2676 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.250.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-c-b54c1f5c93&limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:56.808728 kubelet[2676]: E0430 03:46:56.808305 2676 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://37.27.250.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-c-b54c1f5c93&limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:56.810356 kubelet[2676]: I0430 03:46:56.810282 2676 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:46:56.812149 kubelet[2676]: I0430 03:46:56.811587 2676 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:46:56.812149 kubelet[2676]: W0430 03:46:56.811643 2676 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:46:56.812149 kubelet[2676]: I0430 03:46:56.812145 2676 server.go:1264] "Started kubelet" Apr 30 03:46:56.812260 kubelet[2676]: W0430 03:46:56.812223 2676 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.250.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:56.812283 kubelet[2676]: E0430 03:46:56.812260 2676 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://37.27.250.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:56.817439 kubelet[2676]: E0430 03:46:56.817315 2676 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.250.194:6443/api/v1/namespaces/default/events\": dial tcp 37.27.250.194:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-c-b54c1f5c93.183afbf0b54ffcc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-c-b54c1f5c93,UID:ci-4081-3-3-c-b54c1f5c93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-c-b54c1f5c93,},FirstTimestamp:2025-04-30 03:46:56.812129477 +0000 UTC m=+0.265850904,LastTimestamp:2025-04-30 03:46:56.812129477 +0000 UTC m=+0.265850904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-c-b54c1f5c93,}" Apr 30 03:46:56.818273 kubelet[2676]: I0430 03:46:56.817557 2676 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:46:56.818273 kubelet[2676]: I0430 03:46:56.817808 2676 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:46:56.819228 kubelet[2676]: I0430 03:46:56.818739 2676 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:46:56.821801 kubelet[2676]: I0430 03:46:56.821545 2676 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:46:56.823139 kubelet[2676]: I0430 03:46:56.822562 2676 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:46:56.823970 kubelet[2676]: I0430 03:46:56.823958 2676 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:46:56.824670 kubelet[2676]: E0430 03:46:56.824604 2676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.250.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-c-b54c1f5c93?timeout=10s\": dial tcp 37.27.250.194:6443: connect: connection refused" interval="200ms" Apr 30 03:46:56.824824 kubelet[2676]: I0430 03:46:56.824804 2676 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:46:56.824886 kubelet[2676]: I0430 03:46:56.824866 2676 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:46:56.826467 kubelet[2676]: I0430 03:46:56.826441 2676 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:46:56.827465 kubelet[2676]: I0430 03:46:56.827346 2676 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:46:56.827465 kubelet[2676]: I0430 03:46:56.827393 2676 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:46:56.833119 kubelet[2676]: W0430 03:46:56.833087 2676 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.250.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:56.834355 kubelet[2676]: E0430 03:46:56.833964 2676 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://37.27.250.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:56.839070 kubelet[2676]: I0430 03:46:56.839039 2676 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:46:56.839927 kubelet[2676]: I0430 03:46:56.839914 2676 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:46:56.840024 kubelet[2676]: I0430 03:46:56.840015 2676 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:46:56.840083 kubelet[2676]: I0430 03:46:56.840076 2676 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:46:56.840150 kubelet[2676]: E0430 03:46:56.840137 2676 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:46:56.847408 kubelet[2676]: W0430 03:46:56.847380 2676 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.250.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:56.847565 kubelet[2676]: E0430 03:46:56.847478 2676 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://37.27.250.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:56.861388 kubelet[2676]: E0430 03:46:56.861371 2676 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:46:56.865610 kubelet[2676]: I0430 03:46:56.865582 2676 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:46:56.865610 kubelet[2676]: I0430 03:46:56.865593 2676 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:46:56.865697 kubelet[2676]: I0430 03:46:56.865627 2676 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:46:56.867347 kubelet[2676]: I0430 03:46:56.867327 2676 policy_none.go:49] "None policy: Start" Apr 30 03:46:56.867935 kubelet[2676]: I0430 03:46:56.867787 2676 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:46:56.867935 kubelet[2676]: I0430 03:46:56.867822 2676 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:46:56.871951 kubelet[2676]: I0430 03:46:56.871938 2676 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:46:56.872911 kubelet[2676]: I0430 03:46:56.872125 2676 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:46:56.872911 kubelet[2676]: I0430 03:46:56.872201 2676 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:46:56.874778 kubelet[2676]: E0430 03:46:56.874767 2676 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-c-b54c1f5c93\" not found" Apr 30 03:46:56.926474 kubelet[2676]: I0430 03:46:56.926434 2676 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:56.926794 kubelet[2676]: E0430 03:46:56.926766 2676 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://37.27.250.194:6443/api/v1/nodes\": dial tcp 37.27.250.194:6443: connect: connection refused" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:56.941075 kubelet[2676]: I0430 03:46:56.941038 2676 topology_manager.go:215] "Topology Admit Handler" podUID="c7f4c5ca5cefb8490c99284dbc8fc161" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:56.942383 kubelet[2676]: I0430 03:46:56.942354 2676 topology_manager.go:215] "Topology 
Admit Handler" podUID="9a5800ae657061cc9071be7482352f98" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:56.943577 kubelet[2676]: I0430 03:46:56.943554 2676 topology_manager.go:215] "Topology Admit Handler" podUID="ef6b2d728590ca9ff5f25f208090d280" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.025245 kubelet[2676]: E0430 03:46:57.025108 2676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.250.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-c-b54c1f5c93?timeout=10s\": dial tcp 37.27.250.194:6443: connect: connection refused" interval="400ms" Apr 30 03:46:57.128963 kubelet[2676]: I0430 03:46:57.128635 2676 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.128963 kubelet[2676]: I0430 03:46:57.128700 2676 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef6b2d728590ca9ff5f25f208090d280-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-c-b54c1f5c93\" (UID: \"ef6b2d728590ca9ff5f25f208090d280\") " pod="kube-system/kube-scheduler-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.128963 kubelet[2676]: I0430 03:46:57.128721 2676 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.128963 kubelet[2676]: I0430 03:46:57.128761 2676 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.128963 kubelet[2676]: I0430 03:46:57.128779 2676 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.129179 kubelet[2676]: I0430 03:46:57.128794 2676 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.129179 kubelet[2676]: I0430 03:46:57.128808 2676 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c7f4c5ca5cefb8490c99284dbc8fc161-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-c-b54c1f5c93\" (UID: \"c7f4c5ca5cefb8490c99284dbc8fc161\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.129179 kubelet[2676]: I0430 03:46:57.128843 2676 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7f4c5ca5cefb8490c99284dbc8fc161-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-c-b54c1f5c93\" (UID: \"c7f4c5ca5cefb8490c99284dbc8fc161\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.129179 kubelet[2676]: I0430 03:46:57.128879 2676 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7f4c5ca5cefb8490c99284dbc8fc161-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-c-b54c1f5c93\" (UID: \"c7f4c5ca5cefb8490c99284dbc8fc161\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.129179 kubelet[2676]: I0430 03:46:57.129015 2676 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.129420 kubelet[2676]: E0430 03:46:57.129388 2676 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://37.27.250.194:6443/api/v1/nodes\": dial tcp 37.27.250.194:6443: connect: connection refused" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.247958 containerd[1635]: time="2025-04-30T03:46:57.247837874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-c-b54c1f5c93,Uid:c7f4c5ca5cefb8490c99284dbc8fc161,Namespace:kube-system,Attempt:0,}" Apr 30 03:46:57.248486 containerd[1635]: time="2025-04-30T03:46:57.247843604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-c-b54c1f5c93,Uid:9a5800ae657061cc9071be7482352f98,Namespace:kube-system,Attempt:0,}" Apr 30 03:46:57.251874 containerd[1635]: time="2025-04-30T03:46:57.251832649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-c-b54c1f5c93,Uid:ef6b2d728590ca9ff5f25f208090d280,Namespace:kube-system,Attempt:0,}" Apr 30 03:46:57.426079 kubelet[2676]: E0430 03:46:57.426010 2676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.250.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-c-b54c1f5c93?timeout=10s\": dial tcp 37.27.250.194:6443: connect: connection refused" interval="800ms" Apr 30 03:46:57.531918 kubelet[2676]: I0430 03:46:57.531863 2676 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.532441 kubelet[2676]: E0430 03:46:57.532197 2676 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://37.27.250.194:6443/api/v1/nodes\": dial tcp 37.27.250.194:6443: connect: connection refused" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:57.674188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383381361.mount: Deactivated successfully. 
Apr 30 03:46:57.680326 containerd[1635]: time="2025-04-30T03:46:57.680213966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:46:57.682010 containerd[1635]: time="2025-04-30T03:46:57.681749957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:46:57.682629 containerd[1635]: time="2025-04-30T03:46:57.682602047Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:46:57.683647 containerd[1635]: time="2025-04-30T03:46:57.683617007Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:46:57.685145 containerd[1635]: time="2025-04-30T03:46:57.685114323Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:46:57.686411 containerd[1635]: time="2025-04-30T03:46:57.686351836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 30 03:46:57.687058 containerd[1635]: time="2025-04-30T03:46:57.686987324Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:46:57.689928 containerd[1635]: time="2025-04-30T03:46:57.688914038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:46:57.690976 containerd[1635]: time="2025-04-30T03:46:57.690930402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.841059ms" Apr 30 03:46:57.693172 containerd[1635]: time="2025-04-30T03:46:57.692775680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 444.838758ms" Apr 30 03:46:57.697829 containerd[1635]: time="2025-04-30T03:46:57.697787670Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.898683ms" Apr 30 03:46:57.738434 kubelet[2676]: W0430 03:46:57.737716 2676 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.250.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-c-b54c1f5c93&limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused 
Apr 30 03:46:57.738434 kubelet[2676]: E0430 03:46:57.737771 2676 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://37.27.250.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-c-b54c1f5c93&limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:57.802147 containerd[1635]: time="2025-04-30T03:46:57.801948537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:46:57.802147 containerd[1635]: time="2025-04-30T03:46:57.801994754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:46:57.802147 containerd[1635]: time="2025-04-30T03:46:57.802005114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:46:57.802147 containerd[1635]: time="2025-04-30T03:46:57.802082791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:46:57.804612 containerd[1635]: time="2025-04-30T03:46:57.804049962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:46:57.804612 containerd[1635]: time="2025-04-30T03:46:57.804095709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:46:57.804612 containerd[1635]: time="2025-04-30T03:46:57.804109826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:46:57.804612 containerd[1635]: time="2025-04-30T03:46:57.804168788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:46:57.806474 containerd[1635]: time="2025-04-30T03:46:57.806416052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:46:57.807478 containerd[1635]: time="2025-04-30T03:46:57.807451399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:46:57.807729 containerd[1635]: time="2025-04-30T03:46:57.807605522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:46:57.808118 containerd[1635]: time="2025-04-30T03:46:57.808058313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:46:57.848832 kubelet[2676]: W0430 03:46:57.848774 2676 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.250.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:57.848832 kubelet[2676]: E0430 03:46:57.848830 2676 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://37.27.250.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:57.883041 containerd[1635]: time="2025-04-30T03:46:57.882272281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-c-b54c1f5c93,Uid:9a5800ae657061cc9071be7482352f98,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ca51c990713367cfdbe8d6e5f5240c63340bc804ae4baee12e46d85947729d5\"" Apr 30 03:46:57.885942 containerd[1635]: time="2025-04-30T03:46:57.884997533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-c-b54c1f5c93,Uid:ef6b2d728590ca9ff5f25f208090d280,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d2fe42cc14aabba9877db327872b55ae94d8cadd9a2730afcc739341b1d49da\"" Apr 30 03:46:57.887786 containerd[1635]: time="2025-04-30T03:46:57.887749946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-c-b54c1f5c93,Uid:c7f4c5ca5cefb8490c99284dbc8fc161,Namespace:kube-system,Attempt:0,} returns sandbox id \"d87e3f23c9de5ee67e1c2450fd8c3c6e2850a642a8c44e28d343c526d239daa5\"" Apr 30 03:46:57.888072 containerd[1635]: time="2025-04-30T03:46:57.888053443Z" level=info msg="CreateContainer within sandbox \"9ca51c990713367cfdbe8d6e5f5240c63340bc804ae4baee12e46d85947729d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:46:57.891493 containerd[1635]: time="2025-04-30T03:46:57.891474998Z" level=info msg="CreateContainer within sandbox \"0d2fe42cc14aabba9877db327872b55ae94d8cadd9a2730afcc739341b1d49da\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:46:57.892408 containerd[1635]: time="2025-04-30T03:46:57.892390068Z" level=info msg="CreateContainer within sandbox \"d87e3f23c9de5ee67e1c2450fd8c3c6e2850a642a8c44e28d343c526d239daa5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:46:57.908428 containerd[1635]: time="2025-04-30T03:46:57.908409137Z" level=info msg="CreateContainer within sandbox \"9ca51c990713367cfdbe8d6e5f5240c63340bc804ae4baee12e46d85947729d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d0bff9a08b3e9a1e7a8fd21975e5b801ac7e31936e16d58a51fe9c34979a229\"" Apr 30 03:46:57.909272 containerd[1635]: time="2025-04-30T03:46:57.909237281Z" level=info msg="StartContainer for \"7d0bff9a08b3e9a1e7a8fd21975e5b801ac7e31936e16d58a51fe9c34979a229\"" Apr 30 03:46:57.911324 containerd[1635]: time="2025-04-30T03:46:57.911177068Z" level=info msg="CreateContainer within sandbox \"0d2fe42cc14aabba9877db327872b55ae94d8cadd9a2730afcc739341b1d49da\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ed94da080fbbcaf22b805c6c4cc39a071db303f296acd880c42d4b8f0c68be9\"" Apr 30 03:46:57.911901 containerd[1635]: time="2025-04-30T03:46:57.911875798Z" level=info msg="StartContainer for \"9ed94da080fbbcaf22b805c6c4cc39a071db303f296acd880c42d4b8f0c68be9\"" Apr 30 
03:46:57.915387 containerd[1635]: time="2025-04-30T03:46:57.915359201Z" level=info msg="CreateContainer within sandbox \"d87e3f23c9de5ee67e1c2450fd8c3c6e2850a642a8c44e28d343c526d239daa5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"86bd5f9ae3d7a440e9b3ed5013bf3e3ac06ef1224206e8deebbd96f8c0937b07\"" Apr 30 03:46:57.915936 containerd[1635]: time="2025-04-30T03:46:57.915715498Z" level=info msg="StartContainer for \"86bd5f9ae3d7a440e9b3ed5013bf3e3ac06ef1224206e8deebbd96f8c0937b07\"" Apr 30 03:46:57.988017 containerd[1635]: time="2025-04-30T03:46:57.987931316Z" level=info msg="StartContainer for \"7d0bff9a08b3e9a1e7a8fd21975e5b801ac7e31936e16d58a51fe9c34979a229\" returns successfully" Apr 30 03:46:58.014114 containerd[1635]: time="2025-04-30T03:46:58.014084892Z" level=info msg="StartContainer for \"9ed94da080fbbcaf22b805c6c4cc39a071db303f296acd880c42d4b8f0c68be9\" returns successfully" Apr 30 03:46:58.017092 containerd[1635]: time="2025-04-30T03:46:58.017053046Z" level=info msg="StartContainer for \"86bd5f9ae3d7a440e9b3ed5013bf3e3ac06ef1224206e8deebbd96f8c0937b07\" returns successfully" Apr 30 03:46:58.143923 kubelet[2676]: W0430 03:46:58.143669 2676 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.250.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:58.143923 kubelet[2676]: E0430 03:46:58.143728 2676 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://37.27.250.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.250.194:6443: connect: connection refused Apr 30 03:46:58.227638 kubelet[2676]: E0430 03:46:58.227324 2676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.250.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-c-b54c1f5c93?timeout=10s\": dial tcp 37.27.250.194:6443: connect: connection refused" interval="1.6s" Apr 30 03:46:58.335915 kubelet[2676]: I0430 03:46:58.335159 2676 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:59.609146 kubelet[2676]: I0430 03:46:59.609083 2676 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:46:59.619673 kubelet[2676]: E0430 03:46:59.619638 2676 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-b54c1f5c93\" not found" Apr 30 03:46:59.720140 kubelet[2676]: E0430 03:46:59.720084 2676 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-b54c1f5c93\" not found" Apr 30 03:46:59.821298 kubelet[2676]: E0430 03:46:59.821206 2676 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-b54c1f5c93\" not found" Apr 30 03:46:59.922399 kubelet[2676]: E0430 03:46:59.922248 2676 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-b54c1f5c93\" not found" Apr 30 03:47:00.022947 kubelet[2676]: E0430 03:47:00.022876 2676 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-b54c1f5c93\" not found" Apr 30 03:47:00.123533 kubelet[2676]: E0430 03:47:00.123488 2676 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"ci-4081-3-3-c-b54c1f5c93\" not found" Apr 30 03:47:00.224517 kubelet[2676]: E0430 03:47:00.224339 2676 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-b54c1f5c93\" not found" Apr 30 03:47:00.810718 kubelet[2676]: I0430 03:47:00.810662 2676 apiserver.go:52] "Watching apiserver" Apr 30 03:47:00.827723 kubelet[2676]: I0430 03:47:00.827688 2676 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:47:01.475757 systemd[1]: Reloading requested from client PID 2945 ('systemctl') (unit session-7.scope)... Apr 30 03:47:01.475779 systemd[1]: Reloading... Apr 30 03:47:01.533948 zram_generator::config[2985]: No configuration found. Apr 30 03:47:01.616842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:47:01.701379 systemd[1]: Reloading finished in 225 ms. Apr 30 03:47:01.724943 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:47:01.725716 kubelet[2676]: E0430 03:47:01.725094 2676 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-3-3-c-b54c1f5c93.183afbf0b54ffcc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-c-b54c1f5c93,UID:ci-4081-3-3-c-b54c1f5c93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-c-b54c1f5c93,},FirstTimestamp:2025-04-30 03:46:56.812129477 +0000 UTC m=+0.265850904,LastTimestamp:2025-04-30 03:46:56.812129477 +0000 UTC m=+0.265850904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-c-b54c1f5c93,}" Apr 30 03:47:01.743231 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:47:01.743450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:47:01.752087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:47:01.821006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:47:01.824676 (kubelet)[3046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:47:01.855916 kubelet[3046]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:47:01.855916 kubelet[3046]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:47:01.855916 kubelet[3046]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:47:01.855916 kubelet[3046]: I0430 03:47:01.855276 3046 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:47:01.860042 kubelet[3046]: I0430 03:47:01.859957 3046 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:47:01.860125 kubelet[3046]: I0430 03:47:01.860117 3046 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:47:01.860310 kubelet[3046]: I0430 03:47:01.860300 3046 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:47:01.861378 kubelet[3046]: I0430 03:47:01.861351 3046 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:47:01.866914 kubelet[3046]: I0430 03:47:01.865012 3046 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:47:01.874652 kubelet[3046]: I0430 03:47:01.874564 3046 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:47:01.875135 kubelet[3046]: I0430 03:47:01.875100 3046 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:47:01.875293 kubelet[3046]: I0430 03:47:01.875140 3046 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-c-b54c1f5c93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:47:01.875383 kubelet[3046]: I0430 03:47:01.875304 3046 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:47:01.875383 kubelet[3046]: I0430 03:47:01.875318 3046 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:47:01.878381 kubelet[3046]: I0430 03:47:01.878337 3046 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:47:01.878725 kubelet[3046]: I0430 03:47:01.878461 3046 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:47:01.878725 kubelet[3046]: I0430 03:47:01.878479 3046 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Apr 30 03:47:01.878725 kubelet[3046]: I0430 03:47:01.878496 3046 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:47:01.879264 kubelet[3046]: I0430 03:47:01.879090 3046 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:47:01.886515 kubelet[3046]: I0430 03:47:01.886274 3046 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:47:01.887920 kubelet[3046]: I0430 03:47:01.887727 3046 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:47:01.888218 kubelet[3046]: I0430 03:47:01.888200 3046 server.go:1264] "Started kubelet" Apr 30 03:47:01.888648 kubelet[3046]: I0430 03:47:01.888611 3046 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:47:01.889726 kubelet[3046]: I0430 03:47:01.888831 3046 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:47:01.889726 kubelet[3046]: I0430 03:47:01.888862 3046 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:47:01.889726 kubelet[3046]: I0430 03:47:01.889180 3046 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:47:01.889726 kubelet[3046]: I0430 03:47:01.889526 3046 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:47:01.897461 kubelet[3046]: I0430 03:47:01.897437 3046 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:47:01.898765 kubelet[3046]: I0430 03:47:01.898734 3046 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:47:01.898994 kubelet[3046]: I0430 03:47:01.898867 3046 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:47:01.899063 kubelet[3046]: E0430 03:47:01.899011 3046 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:47:01.900269 kubelet[3046]: I0430 03:47:01.900248 3046 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:47:01.903114 kubelet[3046]: I0430 03:47:01.903073 3046 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:47:01.903823 kubelet[3046]: I0430 03:47:01.903782 3046 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:47:01.903823 kubelet[3046]: I0430 03:47:01.903792 3046 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:47:01.904265 kubelet[3046]: I0430 03:47:01.904217 3046 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:47:01.904265 kubelet[3046]: I0430 03:47:01.904235 3046 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:47:01.904348 kubelet[3046]: I0430 03:47:01.904320 3046 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:47:01.904513 kubelet[3046]: E0430 03:47:01.904435 3046 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:47:01.958842 kubelet[3046]: I0430 03:47:01.958816 3046 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:47:01.958842 kubelet[3046]: I0430 03:47:01.958831 3046 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:47:01.958842 kubelet[3046]: I0430 03:47:01.958845 3046 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:47:01.959042 kubelet[3046]: I0430 03:47:01.958980 3046 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:47:01.959042 kubelet[3046]: I0430 03:47:01.958989 3046 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:47:01.959042 kubelet[3046]: I0430 03:47:01.959004 3046 policy_none.go:49] "None policy: Start" Apr 30 03:47:01.961418 kubelet[3046]: I0430 03:47:01.961405 3046 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:47:01.961466 kubelet[3046]: I0430 03:47:01.961421 3046 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:47:01.963292 kubelet[3046]: I0430 03:47:01.961624 3046 state_mem.go:75] "Updated machine memory state" Apr 30 03:47:01.963292 kubelet[3046]: I0430 03:47:01.962545 3046 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:47:01.963292 kubelet[3046]: I0430 03:47:01.962673 3046 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:47:01.963395 kubelet[3046]: I0430 03:47:01.963382 3046 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:47:02.000239 kubelet[3046]: I0430 03:47:02.000152 3046 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.005109 kubelet[3046]: I0430 03:47:02.005065 3046 topology_manager.go:215] "Topology Admit Handler" podUID="c7f4c5ca5cefb8490c99284dbc8fc161" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.005189 kubelet[3046]: I0430 03:47:02.005144 3046 topology_manager.go:215] "Topology Admit Handler" podUID="9a5800ae657061cc9071be7482352f98" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.005228 kubelet[3046]: I0430 03:47:02.005195 3046 topology_manager.go:215] "Topology Admit Handler" podUID="ef6b2d728590ca9ff5f25f208090d280" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.014133 kubelet[3046]: E0430 03:47:02.013946 3046 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-c-b54c1f5c93\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.014133 kubelet[3046]: E0430 03:47:02.013948 3046 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-3-c-b54c1f5c93\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.014133 kubelet[3046]: I0430 03:47:02.014015 3046 kubelet_node_status.go:112] "Node was previously registered" 
node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.014133 kubelet[3046]: I0430 03:47:02.014078 3046 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.099332 kubelet[3046]: I0430 03:47:02.099285 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.099332 kubelet[3046]: I0430 03:47:02.099333 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.099332 kubelet[3046]: I0430 03:47:02.099355 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7f4c5ca5cefb8490c99284dbc8fc161-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-c-b54c1f5c93\" (UID: \"c7f4c5ca5cefb8490c99284dbc8fc161\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.099613 kubelet[3046]: I0430 03:47:02.099382 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7f4c5ca5cefb8490c99284dbc8fc161-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-c-b54c1f5c93\" (UID: \"c7f4c5ca5cefb8490c99284dbc8fc161\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.099613 kubelet[3046]: I0430 03:47:02.099402 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.099613 kubelet[3046]: I0430 03:47:02.099418 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef6b2d728590ca9ff5f25f208090d280-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-c-b54c1f5c93\" (UID: \"ef6b2d728590ca9ff5f25f208090d280\") " pod="kube-system/kube-scheduler-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.099613 kubelet[3046]: I0430 03:47:02.099437 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7f4c5ca5cefb8490c99284dbc8fc161-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-c-b54c1f5c93\" (UID: \"c7f4c5ca5cefb8490c99284dbc8fc161\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.099613 kubelet[3046]: I0430 03:47:02.099451 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.099723 kubelet[3046]: I0430 03:47:02.099466 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a5800ae657061cc9071be7482352f98-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-c-b54c1f5c93\" (UID: \"9a5800ae657061cc9071be7482352f98\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.881672 kubelet[3046]: I0430 03:47:02.880458 3046 apiserver.go:52] "Watching apiserver" Apr 30 03:47:02.899659 kubelet[3046]: I0430 03:47:02.899633 3046 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:47:02.946442 kubelet[3046]: E0430 03:47:02.945845 3046 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-c-b54c1f5c93\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:02.960292 kubelet[3046]: I0430 03:47:02.960177 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-c-b54c1f5c93" podStartSLOduration=1.9601604799999999 podStartE2EDuration="1.96016048s" podCreationTimestamp="2025-04-30 03:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:02.959645662 +0000 UTC m=+1.131177553" watchObservedRunningTime="2025-04-30 03:47:02.96016048 +0000 UTC m=+1.131692361" Apr 30 03:47:02.968797 kubelet[3046]: I0430 03:47:02.968754 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-c-b54c1f5c93" podStartSLOduration=0.968742926 podStartE2EDuration="968.742926ms" podCreationTimestamp="2025-04-30 03:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:02.968517046 +0000 UTC m=+1.140048937" watchObservedRunningTime="2025-04-30 03:47:02.968742926 +0000 UTC m=+1.140274806" Apr 30 03:47:02.977133 kubelet[3046]: I0430 03:47:02.977082 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-c-b54c1f5c93" podStartSLOduration=2.977073251 podStartE2EDuration="2.977073251s" podCreationTimestamp="2025-04-30 03:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:02.976313717 +0000 UTC m=+1.147845608" watchObservedRunningTime="2025-04-30 03:47:02.977073251 +0000 UTC m=+1.148605142" Apr 30 03:47:06.764846 sudo[2103]: pam_unix(sudo:session): session closed for user root Apr 30 03:47:06.923651 sshd[2099]: pam_unix(sshd:session): session closed for user core Apr 30 03:47:06.925959 systemd[1]: sshd@6-37.27.250.194:22-139.178.68.195:34910.service: Deactivated successfully. Apr 30 03:47:06.928215 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:47:06.929133 systemd-logind[1611]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:47:06.930348 systemd-logind[1611]: Removed session 7. 
Apr 30 03:47:15.824772 kubelet[3046]: I0430 03:47:15.824738 3046 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:47:15.827605 containerd[1635]: time="2025-04-30T03:47:15.827559555Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:47:15.827859 kubelet[3046]: I0430 03:47:15.827729 3046 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:47:16.689544 kubelet[3046]: I0430 03:47:16.688961 3046 topology_manager.go:215] "Topology Admit Handler" podUID="03d6ef07-9f44-4cef-aeb3-cb073de8dcc3" podNamespace="kube-system" podName="kube-proxy-xg5b2" Apr 30 03:47:16.788285 kubelet[3046]: I0430 03:47:16.788193 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wmmv\" (UniqueName: \"kubernetes.io/projected/03d6ef07-9f44-4cef-aeb3-cb073de8dcc3-kube-api-access-4wmmv\") pod \"kube-proxy-xg5b2\" (UID: \"03d6ef07-9f44-4cef-aeb3-cb073de8dcc3\") " pod="kube-system/kube-proxy-xg5b2" Apr 30 03:47:16.788285 kubelet[3046]: I0430 03:47:16.788271 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03d6ef07-9f44-4cef-aeb3-cb073de8dcc3-lib-modules\") pod \"kube-proxy-xg5b2\" (UID: \"03d6ef07-9f44-4cef-aeb3-cb073de8dcc3\") " pod="kube-system/kube-proxy-xg5b2" Apr 30 03:47:16.788562 kubelet[3046]: I0430 03:47:16.788324 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03d6ef07-9f44-4cef-aeb3-cb073de8dcc3-kube-proxy\") pod \"kube-proxy-xg5b2\" (UID: \"03d6ef07-9f44-4cef-aeb3-cb073de8dcc3\") " pod="kube-system/kube-proxy-xg5b2" Apr 30 03:47:16.788562 kubelet[3046]: I0430 03:47:16.788351 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03d6ef07-9f44-4cef-aeb3-cb073de8dcc3-xtables-lock\") pod \"kube-proxy-xg5b2\" (UID: \"03d6ef07-9f44-4cef-aeb3-cb073de8dcc3\") " pod="kube-system/kube-proxy-xg5b2" Apr 30 03:47:16.919056 kubelet[3046]: I0430 03:47:16.918492 3046 topology_manager.go:215] "Topology Admit Handler" podUID="2adfaa66-65af-4055-85fa-b72df951b0d1" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-4scsk" Apr 30 03:47:16.989555 kubelet[3046]: I0430 03:47:16.989359 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j79b6\" (UniqueName: \"kubernetes.io/projected/2adfaa66-65af-4055-85fa-b72df951b0d1-kube-api-access-j79b6\") pod \"tigera-operator-797db67f8-4scsk\" (UID: \"2adfaa66-65af-4055-85fa-b72df951b0d1\") " pod="tigera-operator/tigera-operator-797db67f8-4scsk" Apr 30 03:47:16.989555 kubelet[3046]: I0430 03:47:16.989415 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2adfaa66-65af-4055-85fa-b72df951b0d1-var-lib-calico\") pod \"tigera-operator-797db67f8-4scsk\" (UID: \"2adfaa66-65af-4055-85fa-b72df951b0d1\") " pod="tigera-operator/tigera-operator-797db67f8-4scsk" Apr 30 03:47:16.996857 containerd[1635]: time="2025-04-30T03:47:16.996750598Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-xg5b2,Uid:03d6ef07-9f44-4cef-aeb3-cb073de8dcc3,Namespace:kube-system,Attempt:0,}" Apr 30 03:47:17.033509 containerd[1635]: time="2025-04-30T03:47:17.033385352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:17.033701 containerd[1635]: time="2025-04-30T03:47:17.033646918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:17.033801 containerd[1635]: time="2025-04-30T03:47:17.033728864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:17.034555 containerd[1635]: time="2025-04-30T03:47:17.034498687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:17.065716 containerd[1635]: time="2025-04-30T03:47:17.065667176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xg5b2,Uid:03d6ef07-9f44-4cef-aeb3-cb073de8dcc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"09d5957cd493918d674fc35dbce7f5fb5d93b4b3e9f7ef16b8bf523872fac79d\"" Apr 30 03:47:17.067848 containerd[1635]: time="2025-04-30T03:47:17.067752556Z" level=info msg="CreateContainer within sandbox \"09d5957cd493918d674fc35dbce7f5fb5d93b4b3e9f7ef16b8bf523872fac79d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:47:17.079446 containerd[1635]: time="2025-04-30T03:47:17.079408462Z" level=info msg="CreateContainer within sandbox \"09d5957cd493918d674fc35dbce7f5fb5d93b4b3e9f7ef16b8bf523872fac79d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"829c9e0194a60368b718ea8cba2898b7565555b0b99e36e435e0f29de50b5fe7\"" Apr 30 03:47:17.080140 containerd[1635]: time="2025-04-30T03:47:17.080113432Z" level=info msg="StartContainer for \"829c9e0194a60368b718ea8cba2898b7565555b0b99e36e435e0f29de50b5fe7\"" Apr 30 03:47:17.117935 containerd[1635]: time="2025-04-30T03:47:17.117885666Z" level=info msg="StartContainer for \"829c9e0194a60368b718ea8cba2898b7565555b0b99e36e435e0f29de50b5fe7\" returns successfully" Apr 30 03:47:17.225398 containerd[1635]: time="2025-04-30T03:47:17.225355360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-4scsk,Uid:2adfaa66-65af-4055-85fa-b72df951b0d1,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:47:17.250750 containerd[1635]: time="2025-04-30T03:47:17.250376710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:17.250750 containerd[1635]: time="2025-04-30T03:47:17.250432165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:17.250750 containerd[1635]: time="2025-04-30T03:47:17.250452703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:17.251471 containerd[1635]: time="2025-04-30T03:47:17.251366580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:47:17.296760 containerd[1635]: time="2025-04-30T03:47:17.296732551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-4scsk,Uid:2adfaa66-65af-4055-85fa-b72df951b0d1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"87f9eb28eaddebbf0ef14060b7a4bb9f5f8fdbc890e5e5367737391d44e98700\""
Apr 30 03:47:17.298382 containerd[1635]: time="2025-04-30T03:47:17.298348622Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
Apr 30 03:47:17.979729 kubelet[3046]: I0430 03:47:17.979683 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xg5b2" podStartSLOduration=1.9796699709999999 podStartE2EDuration="1.979669971s" podCreationTimestamp="2025-04-30 03:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:17.979319414 +0000 UTC m=+16.150851296" watchObservedRunningTime="2025-04-30 03:47:17.979669971 +0000 UTC m=+16.151201852"
Apr 30 03:47:19.094565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173104933.mount: Deactivated successfully.
Apr 30 03:47:19.425426 containerd[1635]: time="2025-04-30T03:47:19.425319993Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:19.426301 containerd[1635]: time="2025-04-30T03:47:19.426170750Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
Apr 30 03:47:19.429682 containerd[1635]: time="2025-04-30T03:47:19.429643868Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:19.431459 containerd[1635]: time="2025-04-30T03:47:19.431398070Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:19.431919 containerd[1635]: time="2025-04-30T03:47:19.431859438Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.133484366s"
Apr 30 03:47:19.431919 containerd[1635]: time="2025-04-30T03:47:19.431913560Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
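The pull entries above allow a quick consistency check, sketched below (not from the log): the gap between the PullImage request and the Pulled image event matches the reported "in 2.133484366s", and the reported unpacked size over that gap gives the effective rate (roughly 10 MiB/s; the transport read was 22002662 bytes versus the 21998657-byte image).

```go
// Sketch: recomputes the tigera-operator pull duration and effective rate
// from the containerd timestamps and size reported in the entries above.
package main

import (
	"fmt"
	"time"
)

func main() {
	started, _ := time.Parse(time.RFC3339Nano, "2025-04-30T03:47:17.298348622Z")
	pulled, _ := time.Parse(time.RFC3339Nano, "2025-04-30T03:47:19.431859438Z")
	elapsed := pulled.Sub(started) // ~2.1335s, in line with the logged duration

	const sizeBytes = 21998657.0 // size "21998657" from the Pulled image entry
	fmt.Printf("elapsed: %v\n", elapsed)
	fmt.Printf("rate: %.1f MiB/s\n", sizeBytes/elapsed.Seconds()/(1024*1024))
}
```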
\"61f1daf7e4df85451c85e133df0735024cee20d490eca5138befcfae9fba1ab3\"" Apr 30 03:47:19.527292 containerd[1635]: time="2025-04-30T03:47:19.527250793Z" level=info msg="StartContainer for \"61f1daf7e4df85451c85e133df0735024cee20d490eca5138befcfae9fba1ab3\" returns successfully" Apr 30 03:47:19.993302 kubelet[3046]: I0430 03:47:19.993156 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-4scsk" podStartSLOduration=1.850960559 podStartE2EDuration="3.993138169s" podCreationTimestamp="2025-04-30 03:47:16 +0000 UTC" firstStartedPulling="2025-04-30 03:47:17.297640717 +0000 UTC m=+15.469172599" lastFinishedPulling="2025-04-30 03:47:19.439818328 +0000 UTC m=+17.611350209" observedRunningTime="2025-04-30 03:47:19.992771793 +0000 UTC m=+18.164303684" watchObservedRunningTime="2025-04-30 03:47:19.993138169 +0000 UTC m=+18.164670060" Apr 30 03:47:22.746347 kubelet[3046]: I0430 03:47:22.746288 3046 topology_manager.go:215] "Topology Admit Handler" podUID="4a9a1592-fe0c-4cb6-b483-aa790fddb06a" podNamespace="calico-system" podName="calico-typha-5548588bdb-5sdm9" Apr 30 03:47:22.818312 kubelet[3046]: I0430 03:47:22.817020 3046 topology_manager.go:215] "Topology Admit Handler" podUID="c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" podNamespace="calico-system" podName="calico-node-l8lvq" Apr 30 03:47:22.829697 kubelet[3046]: I0430 03:47:22.829602 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-var-run-calico\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.829834 kubelet[3046]: I0430 03:47:22.829823 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-net-dir\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.830516 kubelet[3046]: I0430 03:47:22.830474 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-typha-certs\") pod \"calico-typha-5548588bdb-5sdm9\" (UID: \"4a9a1592-fe0c-4cb6-b483-aa790fddb06a\") " pod="calico-system/calico-typha-5548588bdb-5sdm9" Apr 30 03:47:22.830617 kubelet[3046]: I0430 03:47:22.830535 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-tigera-ca-bundle\") pod \"calico-typha-5548588bdb-5sdm9\" (UID: \"4a9a1592-fe0c-4cb6-b483-aa790fddb06a\") " pod="calico-system/calico-typha-5548588bdb-5sdm9" Apr 30 03:47:22.830617 kubelet[3046]: I0430 03:47:22.830555 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-log-dir\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.830617 kubelet[3046]: I0430 03:47:22.830568 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-flexvol-driver-host\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.830617 kubelet[3046]: I0430 03:47:22.830581 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sst5\" (UniqueName: \"kubernetes.io/projected/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-kube-api-access-5sst5\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.830617 kubelet[3046]: I0430 03:47:22.830592 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-policysync\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.830716 kubelet[3046]: I0430 03:47:22.830622 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-node-certs\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.830716 kubelet[3046]: I0430 03:47:22.830647 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dmp\" (UniqueName: \"kubernetes.io/projected/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-kube-api-access-r7dmp\") pod \"calico-typha-5548588bdb-5sdm9\" (UID: \"4a9a1592-fe0c-4cb6-b483-aa790fddb06a\") " pod="calico-system/calico-typha-5548588bdb-5sdm9" Apr 30 03:47:22.830716 kubelet[3046]: I0430 03:47:22.830663 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-lib-modules\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.830716 kubelet[3046]: I0430 03:47:22.830676 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-xtables-lock\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.830716 kubelet[3046]: I0430 03:47:22.830688 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-tigera-ca-bundle\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.831301 kubelet[3046]: I0430 03:47:22.830700 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-var-lib-calico\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.831301 kubelet[3046]: I0430 03:47:22.830712 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-bin-dir\") pod \"calico-node-l8lvq\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " pod="calico-system/calico-node-l8lvq" Apr 30 03:47:22.942448 kubelet[3046]: E0430 03:47:22.940498 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.942448 kubelet[3046]: W0430 03:47:22.940516 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.942448 kubelet[3046]: E0430 03:47:22.940545 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:22.942448 kubelet[3046]: E0430 03:47:22.940812 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.942448 kubelet[3046]: W0430 03:47:22.940820 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.942448 kubelet[3046]: E0430 03:47:22.940828 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:22.942448 kubelet[3046]: E0430 03:47:22.942242 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.942448 kubelet[3046]: W0430 03:47:22.942251 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.942448 kubelet[3046]: E0430 03:47:22.942260 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:22.946402 kubelet[3046]: E0430 03:47:22.942490 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.946402 kubelet[3046]: W0430 03:47:22.942501 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.946402 kubelet[3046]: E0430 03:47:22.942513 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:47:22.946402 kubelet[3046]: I0430 03:47:22.942944 3046 topology_manager.go:215] "Topology Admit Handler" podUID="77fda787-d7b6-4fd3-822b-fc38fd6f240c" podNamespace="calico-system" podName="csi-node-driver-q77vc" Apr 30 03:47:22.946402 kubelet[3046]: E0430 03:47:22.943135 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q77vc" podUID="77fda787-d7b6-4fd3-822b-fc38fd6f240c" Apr 30 03:47:22.947032 kubelet[3046]: E0430 03:47:22.946951 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.947032 kubelet[3046]: W0430 03:47:22.946967 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.947032 kubelet[3046]: E0430 03:47:22.946976 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:22.948347 kubelet[3046]: E0430 03:47:22.948317 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.948347 kubelet[3046]: W0430 03:47:22.948334 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.948347 kubelet[3046]: E0430 03:47:22.948342 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:22.949399 kubelet[3046]: E0430 03:47:22.949046 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.949399 kubelet[3046]: W0430 03:47:22.949057 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.949399 kubelet[3046]: E0430 03:47:22.949064 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:22.949399 kubelet[3046]: E0430 03:47:22.949226 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.949399 kubelet[3046]: W0430 03:47:22.949233 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.949399 kubelet[3046]: E0430 03:47:22.949241 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:47:22.955245 kubelet[3046]: E0430 03:47:22.954284 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.955245 kubelet[3046]: W0430 03:47:22.954297 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.955245 kubelet[3046]: E0430 03:47:22.954305 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:22.961915 kubelet[3046]: E0430 03:47:22.960517 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.961915 kubelet[3046]: W0430 03:47:22.960542 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.961915 kubelet[3046]: E0430 03:47:22.960552 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:22.966411 kubelet[3046]: E0430 03:47:22.966241 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:22.966411 kubelet[3046]: W0430 03:47:22.966253 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:22.966411 kubelet[3046]: E0430 03:47:22.966263 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:23.021932 kubelet[3046]: E0430 03:47:23.021759 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:23.021932 kubelet[3046]: W0430 03:47:23.021792 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:23.021932 kubelet[3046]: E0430 03:47:23.021811 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:23.023250 kubelet[3046]: E0430 03:47:23.022930 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:23.023250 kubelet[3046]: W0430 03:47:23.022943 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:23.023250 kubelet[3046]: E0430 03:47:23.022957 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:47:23.023250 kubelet[3046]: E0430 03:47:23.023140 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:23.023250 kubelet[3046]: W0430 03:47:23.023149 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:23.023250 kubelet[3046]: E0430 03:47:23.023185 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:23.023717 kubelet[3046]: E0430 03:47:23.023690 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:23.023717 kubelet[3046]: W0430 03:47:23.023706 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:23.023717 kubelet[3046]: E0430 03:47:23.023718 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:23.024405 kubelet[3046]: E0430 03:47:23.024305 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:23.024405 kubelet[3046]: W0430 03:47:23.024340 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:23.024405 kubelet[3046]: E0430 03:47:23.024349 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:23.024790 kubelet[3046]: E0430 03:47:23.024510 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:23.024790 kubelet[3046]: W0430 03:47:23.024517 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:23.024790 kubelet[3046]: E0430 03:47:23.024538 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:47:23.024790 kubelet[3046]: E0430 03:47:23.024771 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:23.024790 kubelet[3046]: W0430 03:47:23.024781 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:23.024790 kubelet[3046]: E0430 03:47:23.024788 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:47:23.025365 kubelet[3046]: E0430 03:47:23.024965 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:47:23.025365 kubelet[3046]: W0430 03:47:23.024972 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:47:23.025365 kubelet[3046]: E0430 03:47:23.024979 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:47:23.033168 kubelet[3046]: I0430 03:47:23.033084 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jvz\" (UniqueName: \"kubernetes.io/projected/77fda787-d7b6-4fd3-822b-fc38fd6f240c-kube-api-access-42jvz\") pod \"csi-node-driver-q77vc\" (UID: \"77fda787-d7b6-4fd3-822b-fc38fd6f240c\") " pod="calico-system/csi-node-driver-q77vc"
Apr 30 03:47:23.033488 kubelet[3046]: I0430 03:47:23.033413 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77fda787-d7b6-4fd3-822b-fc38fd6f240c-varrun\") pod \"csi-node-driver-q77vc\" (UID: \"77fda787-d7b6-4fd3-822b-fc38fd6f240c\") " pod="calico-system/csi-node-driver-q77vc"
Apr 30 03:47:23.033973 kubelet[3046]: I0430 03:47:23.033855 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77fda787-d7b6-4fd3-822b-fc38fd6f240c-registration-dir\") pod \"csi-node-driver-q77vc\" (UID: \"77fda787-d7b6-4fd3-822b-fc38fd6f240c\") " pod="calico-system/csi-node-driver-q77vc"
Apr 30 03:47:23.034317 kubelet[3046]: I0430 03:47:23.034278 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77fda787-d7b6-4fd3-822b-fc38fd6f240c-kubelet-dir\") pod \"csi-node-driver-q77vc\" (UID: \"77fda787-d7b6-4fd3-822b-fc38fd6f240c\") " pod="calico-system/csi-node-driver-q77vc"
Apr 30 03:47:23.034886 kubelet[3046]: I0430 03:47:23.034678 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77fda787-d7b6-4fd3-822b-fc38fd6f240c-socket-dir\") pod \"csi-node-driver-q77vc\" (UID: \"77fda787-d7b6-4fd3-822b-fc38fd6f240c\") " pod="calico-system/csi-node-driver-q77vc"
Apr 30 03:47:23.054399 containerd[1635]: time="2025-04-30T03:47:23.054119568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5548588bdb-5sdm9,Uid:4a9a1592-fe0c-4cb6-b483-aa790fddb06a,Namespace:calico-system,Attempt:0,}"
Apr 30 03:47:23.081353 containerd[1635]: time="2025-04-30T03:47:23.080738846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:47:23.081644 containerd[1635]: time="2025-04-30T03:47:23.081519860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:47:23.082369 containerd[1635]: time="2025-04-30T03:47:23.082101434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:47:23.082369 containerd[1635]: time="2025-04-30T03:47:23.082253743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:47:23.124291 containerd[1635]: time="2025-04-30T03:47:23.124045313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l8lvq,Uid:c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2,Namespace:calico-system,Attempt:0,}"
Apr 30 03:47:23.151236 containerd[1635]: time="2025-04-30T03:47:23.151144782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:47:23.152298 containerd[1635]: time="2025-04-30T03:47:23.152182573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:47:23.152298 containerd[1635]: time="2025-04-30T03:47:23.152262735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:47:23.153617 containerd[1635]: time="2025-04-30T03:47:23.152447878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:47:23.172824 containerd[1635]: time="2025-04-30T03:47:23.172793397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5548588bdb-5sdm9,Uid:4a9a1592-fe0c-4cb6-b483-aa790fddb06a,Namespace:calico-system,Attempt:0,} returns sandbox id \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\""
Apr 30 03:47:23.174406 containerd[1635]: time="2025-04-30T03:47:23.174358229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
Apr 30 03:47:23.209710 containerd[1635]: time="2025-04-30T03:47:23.209683405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l8lvq,Uid:c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\""
Apr 30 03:47:24.905509 kubelet[3046]: E0430 03:47:24.905438 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q77vc" podUID="77fda787-d7b6-4fd3-822b-fc38fd6f240c"
Apr 30 03:47:25.934874 containerd[1635]: time="2025-04-30T03:47:25.934740393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:25.935934 containerd[1635]: time="2025-04-30T03:47:25.935865501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
Apr 30 03:47:25.936591 containerd[1635]: time="2025-04-30T03:47:25.936524241Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:25.938228 containerd[1635]: time="2025-04-30T03:47:25.938127137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:25.938573 containerd[1635]: time="2025-04-30T03:47:25.938536835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.764158187s" Apr 30 03:47:25.938610 containerd[1635]: time="2025-04-30T03:47:25.938575328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:47:25.939984 containerd[1635]: time="2025-04-30T03:47:25.939968724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:47:25.950140 containerd[1635]: time="2025-04-30T03:47:25.950094910Z" level=info msg="CreateContainer within sandbox \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:47:25.965052 containerd[1635]: time="2025-04-30T03:47:25.965019590Z" level=info msg="CreateContainer within sandbox \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\"" Apr 30 03:47:25.965432 containerd[1635]: time="2025-04-30T03:47:25.965417056Z" level=info msg="StartContainer for \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\"" Apr 30 03:47:26.023429 containerd[1635]: time="2025-04-30T03:47:26.023400658Z" level=info msg="StartContainer for \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\" returns successfully" Apr 30 03:47:26.904753 kubelet[3046]: E0430 03:47:26.904626 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q77vc" podUID="77fda787-d7b6-4fd3-822b-fc38fd6f240c" Apr 30 03:47:27.006463 kubelet[3046]: I0430 03:47:27.006074 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5548588bdb-5sdm9" podStartSLOduration=2.240536381 podStartE2EDuration="5.005962917s" podCreationTimestamp="2025-04-30 03:47:22 +0000 UTC" firstStartedPulling="2025-04-30 03:47:23.173996352 +0000 UTC m=+21.345528233" lastFinishedPulling="2025-04-30 03:47:25.939422887 +0000 UTC m=+24.110954769" observedRunningTime="2025-04-30 03:47:27.005771663 +0000 UTC m=+25.177303554" watchObservedRunningTime="2025-04-30 03:47:27.005962917 +0000 UTC m=+25.177494798" Apr 30 03:47:27.056507 kubelet[3046]: E0430 03:47:27.056445 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:47:27.056507 kubelet[3046]: W0430 03:47:27.056477 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:47:27.056507 kubelet[3046]: E0430 03:47:27.056501 3046 plugins.go:730] "Error 
Apr 30 03:47:27.056507 kubelet[3046]: E0430 03:47:27.056445 3046 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:47:27.056507 kubelet[3046]: W0430 03:47:27.056477 3046 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:47:27.056507 kubelet[3046]: E0430 03:47:27.056501 3046 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:47:27.567991 containerd[1635]: time="2025-04-30T03:47:27.567948720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:27.568889 containerd[1635]: time="2025-04-30T03:47:27.568828602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937"
Apr 30 03:47:27.569815 containerd[1635]: time="2025-04-30T03:47:27.569763748Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:27.571778 containerd[1635]: time="2025-04-30T03:47:27.571738198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:27.572512 containerd[1635]: time="2025-04-30T03:47:27.572147946Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.632089041s"
Apr 30 03:47:27.572512 containerd[1635]: time="2025-04-30T03:47:27.572173996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
Apr 30 03:47:27.573888 containerd[1635]: time="2025-04-30T03:47:27.573793452Z" level=info msg="CreateContainer within sandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 30 03:47:27.590548 containerd[1635]: time="2025-04-30T03:47:27.590518099Z" level=info msg="CreateContainer within sandbox
\"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\"" Apr 30 03:47:27.590954 containerd[1635]: time="2025-04-30T03:47:27.590877703Z" level=info msg="StartContainer for \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\"" Apr 30 03:47:27.637003 containerd[1635]: time="2025-04-30T03:47:27.636955778Z" level=info msg="StartContainer for \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\" returns successfully" Apr 30 03:47:27.667123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace-rootfs.mount: Deactivated successfully. Apr 30 03:47:27.697215 containerd[1635]: time="2025-04-30T03:47:27.675399521Z" level=info msg="shim disconnected" id=523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace namespace=k8s.io Apr 30 03:47:27.697215 containerd[1635]: time="2025-04-30T03:47:27.697210892Z" level=warning msg="cleaning up after shim disconnected" id=523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace namespace=k8s.io Apr 30 03:47:27.697446 containerd[1635]: time="2025-04-30T03:47:27.697223856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:47:28.002304 kubelet[3046]: I0430 03:47:28.002008 3046 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:28.007530 containerd[1635]: time="2025-04-30T03:47:28.006753583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:47:28.905323 kubelet[3046]: E0430 03:47:28.905284 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q77vc" podUID="77fda787-d7b6-4fd3-822b-fc38fd6f240c" Apr 30 03:47:30.904688 kubelet[3046]: E0430 03:47:30.904623 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q77vc" podUID="77fda787-d7b6-4fd3-822b-fc38fd6f240c" Apr 30 03:47:32.905411 kubelet[3046]: E0430 03:47:32.905370 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q77vc" podUID="77fda787-d7b6-4fd3-822b-fc38fd6f240c" Apr 30 03:47:33.143346 containerd[1635]: time="2025-04-30T03:47:33.143287590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:33.144276 containerd[1635]: time="2025-04-30T03:47:33.144230570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:47:33.145079 containerd[1635]: time="2025-04-30T03:47:33.145037603Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:33.147425 containerd[1635]: time="2025-04-30T03:47:33.147377327Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:33.148366 containerd[1635]: time="2025-04-30T03:47:33.148320479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.140952869s" Apr 30 03:47:33.148366 containerd[1635]: time="2025-04-30T03:47:33.148356136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:47:33.151489 containerd[1635]: time="2025-04-30T03:47:33.151458669Z" level=info msg="CreateContainer within sandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:47:33.168006 containerd[1635]: time="2025-04-30T03:47:33.167916226Z" level=info msg="CreateContainer within sandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\"" Apr 30 03:47:33.169617 containerd[1635]: time="2025-04-30T03:47:33.168499735Z" level=info msg="StartContainer for \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\"" Apr 30 03:47:33.209580 systemd[1]: run-containerd-runc-k8s.io-6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61-runc.Ix6XXT.mount: Deactivated successfully. Apr 30 03:47:33.238847 containerd[1635]: time="2025-04-30T03:47:33.238785236Z" level=info msg="StartContainer for \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\" returns successfully" Apr 30 03:47:33.595863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61-rootfs.mount: Deactivated successfully. 
Apr 30 03:47:33.598907 containerd[1635]: time="2025-04-30T03:47:33.598586821Z" level=info msg="shim disconnected" id=6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61 namespace=k8s.io Apr 30 03:47:33.598907 containerd[1635]: time="2025-04-30T03:47:33.598646414Z" level=warning msg="cleaning up after shim disconnected" id=6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61 namespace=k8s.io Apr 30 03:47:33.598907 containerd[1635]: time="2025-04-30T03:47:33.598653958Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:47:33.606210 kubelet[3046]: I0430 03:47:33.606027 3046 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:47:33.635800 kubelet[3046]: I0430 03:47:33.634955 3046 topology_manager.go:215] "Topology Admit Handler" podUID="c2c1ef43-1732-4ee2-984a-7c96357acb4c" podNamespace="calico-system" podName="calico-kube-controllers-548667d5d7-rpfg6" Apr 30 03:47:33.639604 kubelet[3046]: I0430 03:47:33.639537 3046 topology_manager.go:215] "Topology Admit Handler" podUID="7758e0dd-8e52-4b84-ad62-173247f00bf9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fn9tg" Apr 30 03:47:33.641127 kubelet[3046]: I0430 03:47:33.641110 3046 topology_manager.go:215] "Topology Admit Handler" podUID="94898af1-d8d4-4a31-ac96-01740beca0cc" podNamespace="calico-apiserver" podName="calico-apiserver-855c9c9d54-6r7fb" Apr 30 03:47:33.644344 kubelet[3046]: I0430 03:47:33.642322 3046 topology_manager.go:215] "Topology Admit Handler" podUID="517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec" podNamespace="calico-apiserver" podName="calico-apiserver-d6dd767f5-l6gfz" Apr 30 03:47:33.647617 kubelet[3046]: I0430 03:47:33.647585 3046 topology_manager.go:215] "Topology Admit Handler" podUID="8ac99651-c6e8-4435-af79-6d59e81f514c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zsrkz" Apr 30 03:47:33.647854 kubelet[3046]: I0430 03:47:33.647832 3046 topology_manager.go:215] "Topology Admit Handler" podUID="e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86" podNamespace="calico-apiserver" podName="calico-apiserver-855c9c9d54-2tcgf" Apr 30 03:47:33.718252 kubelet[3046]: I0430 03:47:33.718190 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94898af1-d8d4-4a31-ac96-01740beca0cc-calico-apiserver-certs\") pod \"calico-apiserver-855c9c9d54-6r7fb\" (UID: \"94898af1-d8d4-4a31-ac96-01740beca0cc\") " pod="calico-apiserver/calico-apiserver-855c9c9d54-6r7fb" Apr 30 03:47:33.718252 kubelet[3046]: I0430 03:47:33.718254 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbh9t\" (UniqueName: \"kubernetes.io/projected/517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec-kube-api-access-kbh9t\") pod \"calico-apiserver-d6dd767f5-l6gfz\" (UID: \"517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec\") " pod="calico-apiserver/calico-apiserver-d6dd767f5-l6gfz" Apr 30 03:47:33.718252 kubelet[3046]: I0430 03:47:33.718274 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gz8l\" (UniqueName: \"kubernetes.io/projected/8ac99651-c6e8-4435-af79-6d59e81f514c-kube-api-access-7gz8l\") pod \"coredns-7db6d8ff4d-zsrkz\" (UID: \"8ac99651-c6e8-4435-af79-6d59e81f514c\") " pod="kube-system/coredns-7db6d8ff4d-zsrkz" Apr 30 03:47:33.718953 kubelet[3046]: I0430 03:47:33.718288 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2c1ef43-1732-4ee2-984a-7c96357acb4c-tigera-ca-bundle\") pod \"calico-kube-controllers-548667d5d7-rpfg6\" (UID: \"c2c1ef43-1732-4ee2-984a-7c96357acb4c\") " pod="calico-system/calico-kube-controllers-548667d5d7-rpfg6" Apr 30 03:47:33.718953 kubelet[3046]: I0430 03:47:33.718301 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn8lf\" (UniqueName: \"kubernetes.io/projected/e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86-kube-api-access-sn8lf\") pod \"calico-apiserver-855c9c9d54-2tcgf\" (UID: \"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86\") " pod="calico-apiserver/calico-apiserver-855c9c9d54-2tcgf" Apr 30 03:47:33.718953 kubelet[3046]: I0430 03:47:33.718315 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvb6\" (UniqueName: \"kubernetes.io/projected/94898af1-d8d4-4a31-ac96-01740beca0cc-kube-api-access-nbvb6\") pod \"calico-apiserver-855c9c9d54-6r7fb\" (UID: \"94898af1-d8d4-4a31-ac96-01740beca0cc\") " pod="calico-apiserver/calico-apiserver-855c9c9d54-6r7fb" Apr 30 03:47:33.718953 kubelet[3046]: I0430 03:47:33.718328 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7758e0dd-8e52-4b84-ad62-173247f00bf9-config-volume\") pod \"coredns-7db6d8ff4d-fn9tg\" (UID: \"7758e0dd-8e52-4b84-ad62-173247f00bf9\") " pod="kube-system/coredns-7db6d8ff4d-fn9tg" Apr 30 03:47:33.718953 kubelet[3046]: I0430 03:47:33.718342 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ac99651-c6e8-4435-af79-6d59e81f514c-config-volume\") pod \"coredns-7db6d8ff4d-zsrkz\" (UID: \"8ac99651-c6e8-4435-af79-6d59e81f514c\") " pod="kube-system/coredns-7db6d8ff4d-zsrkz" Apr 30 03:47:33.719317 kubelet[3046]: I0430 03:47:33.718355 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d947f\" (UniqueName: \"kubernetes.io/projected/7758e0dd-8e52-4b84-ad62-173247f00bf9-kube-api-access-d947f\") pod \"coredns-7db6d8ff4d-fn9tg\" (UID: \"7758e0dd-8e52-4b84-ad62-173247f00bf9\") " pod="kube-system/coredns-7db6d8ff4d-fn9tg" Apr 30 03:47:33.719317 kubelet[3046]: I0430 03:47:33.718367 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86-calico-apiserver-certs\") pod \"calico-apiserver-855c9c9d54-2tcgf\" (UID: \"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86\") " pod="calico-apiserver/calico-apiserver-855c9c9d54-2tcgf" Apr 30 03:47:33.719317 kubelet[3046]: I0430 03:47:33.718379 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn2nf\" (UniqueName: \"kubernetes.io/projected/c2c1ef43-1732-4ee2-984a-7c96357acb4c-kube-api-access-hn2nf\") pod \"calico-kube-controllers-548667d5d7-rpfg6\" (UID: \"c2c1ef43-1732-4ee2-984a-7c96357acb4c\") " pod="calico-system/calico-kube-controllers-548667d5d7-rpfg6" Apr 30 03:47:33.719317 kubelet[3046]: I0430 03:47:33.718393 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec-calico-apiserver-certs\") pod 
\"calico-apiserver-d6dd767f5-l6gfz\" (UID: \"517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec\") " pod="calico-apiserver/calico-apiserver-d6dd767f5-l6gfz" Apr 30 03:47:33.951986 containerd[1635]: time="2025-04-30T03:47:33.951783130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548667d5d7-rpfg6,Uid:c2c1ef43-1732-4ee2-984a-7c96357acb4c,Namespace:calico-system,Attempt:0,}" Apr 30 03:47:33.952839 containerd[1635]: time="2025-04-30T03:47:33.951800893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855c9c9d54-6r7fb,Uid:94898af1-d8d4-4a31-ac96-01740beca0cc,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:47:33.953163 containerd[1635]: time="2025-04-30T03:47:33.953116632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855c9c9d54-2tcgf,Uid:e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:47:33.957935 containerd[1635]: time="2025-04-30T03:47:33.957853489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fn9tg,Uid:7758e0dd-8e52-4b84-ad62-173247f00bf9,Namespace:kube-system,Attempt:0,}" Apr 30 03:47:33.960017 containerd[1635]: time="2025-04-30T03:47:33.959963166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6dd767f5-l6gfz,Uid:517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:47:33.962388 containerd[1635]: time="2025-04-30T03:47:33.961801036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsrkz,Uid:8ac99651-c6e8-4435-af79-6d59e81f514c,Namespace:kube-system,Attempt:0,}" Apr 30 03:47:34.132921 containerd[1635]: time="2025-04-30T03:47:34.132396841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:47:34.262673 containerd[1635]: time="2025-04-30T03:47:34.258725501Z" level=error msg="Failed to destroy network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.260775 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611-shm.mount: Deactivated successfully. Apr 30 03:47:34.263938 containerd[1635]: time="2025-04-30T03:47:34.263493717Z" level=error msg="Failed to destroy network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.266457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2-shm.mount: Deactivated successfully. 
Apr 30 03:47:34.270443 containerd[1635]: time="2025-04-30T03:47:34.270416656Z" level=error msg="encountered an error cleaning up failed sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.270542 containerd[1635]: time="2025-04-30T03:47:34.270426705Z" level=error msg="encountered an error cleaning up failed sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.275242 containerd[1635]: time="2025-04-30T03:47:34.275056167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fn9tg,Uid:7758e0dd-8e52-4b84-ad62-173247f00bf9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.277734 containerd[1635]: time="2025-04-30T03:47:34.277007885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsrkz,Uid:8ac99651-c6e8-4435-af79-6d59e81f514c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.279800 containerd[1635]: time="2025-04-30T03:47:34.279757697Z" level=error msg="Failed to destroy network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.283690 containerd[1635]: time="2025-04-30T03:47:34.282253336Z" level=error msg="encountered an error cleaning up failed sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.283690 containerd[1635]: time="2025-04-30T03:47:34.282319332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6dd767f5-l6gfz,Uid:517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.284692 kubelet[3046]: E0430 03:47:34.284178 3046 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.284692 kubelet[3046]: E0430 03:47:34.284242 3046 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fn9tg" Apr 30 03:47:34.284692 kubelet[3046]: E0430 03:47:34.284631 3046 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.284692 kubelet[3046]: E0430 03:47:34.284662 3046 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6dd767f5-l6gfz" Apr 30 03:47:34.285450 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f-shm.mount: Deactivated successfully. 
Apr 30 03:47:34.286700 kubelet[3046]: E0430 03:47:34.286680 3046 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6dd767f5-l6gfz" Apr 30 03:47:34.286878 kubelet[3046]: E0430 03:47:34.286835 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d6dd767f5-l6gfz_calico-apiserver(517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d6dd767f5-l6gfz_calico-apiserver(517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d6dd767f5-l6gfz" podUID="517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec" Apr 30 03:47:34.287036 kubelet[3046]: E0430 03:47:34.286774 3046 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fn9tg" Apr 30 03:47:34.287845 kubelet[3046]: E0430 03:47:34.287755 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fn9tg_kube-system(7758e0dd-8e52-4b84-ad62-173247f00bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fn9tg_kube-system(7758e0dd-8e52-4b84-ad62-173247f00bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fn9tg" podUID="7758e0dd-8e52-4b84-ad62-173247f00bf9" Apr 30 03:47:34.287845 kubelet[3046]: E0430 03:47:34.284945 3046 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.287845 kubelet[3046]: E0430 03:47:34.287793 3046 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-zsrkz" Apr 30 03:47:34.288623 kubelet[3046]: E0430 03:47:34.287805 3046 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zsrkz" Apr 30 03:47:34.288623 kubelet[3046]: E0430 03:47:34.287825 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zsrkz_kube-system(8ac99651-c6e8-4435-af79-6d59e81f514c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zsrkz_kube-system(8ac99651-c6e8-4435-af79-6d59e81f514c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zsrkz" podUID="8ac99651-c6e8-4435-af79-6d59e81f514c" Apr 30 03:47:34.297669 containerd[1635]: time="2025-04-30T03:47:34.297320533Z" level=error msg="Failed to destroy network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.299925 containerd[1635]: time="2025-04-30T03:47:34.298138827Z" level=error msg="encountered an error cleaning up failed sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.299925 containerd[1635]: time="2025-04-30T03:47:34.298202298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855c9c9d54-6r7fb,Uid:94898af1-d8d4-4a31-ac96-01740beca0cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.300021 kubelet[3046]: E0430 03:47:34.298572 3046 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.300021 kubelet[3046]: E0430 03:47:34.298613 3046 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855c9c9d54-6r7fb" Apr 30 03:47:34.300021 kubelet[3046]: E0430 03:47:34.298628 3046 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855c9c9d54-6r7fb" Apr 30 03:47:34.300085 kubelet[3046]: E0430 03:47:34.298673 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-855c9c9d54-6r7fb_calico-apiserver(94898af1-d8d4-4a31-ac96-01740beca0cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-855c9c9d54-6r7fb_calico-apiserver(94898af1-d8d4-4a31-ac96-01740beca0cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855c9c9d54-6r7fb" podUID="94898af1-d8d4-4a31-ac96-01740beca0cc" Apr 30 03:47:34.302318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847-shm.mount: Deactivated successfully. Apr 30 03:47:34.306717 containerd[1635]: time="2025-04-30T03:47:34.306676432Z" level=error msg="Failed to destroy network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.306981 containerd[1635]: time="2025-04-30T03:47:34.306954641Z" level=error msg="encountered an error cleaning up failed sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.307030 containerd[1635]: time="2025-04-30T03:47:34.306993505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855c9c9d54-2tcgf,Uid:e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.307135 kubelet[3046]: E0430 03:47:34.307108 3046 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.307200 kubelet[3046]: E0430 
03:47:34.307140 3046 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855c9c9d54-2tcgf" Apr 30 03:47:34.307200 kubelet[3046]: E0430 03:47:34.307155 3046 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855c9c9d54-2tcgf" Apr 30 03:47:34.307299 containerd[1635]: time="2025-04-30T03:47:34.307232638Z" level=error msg="Failed to destroy network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.307321 kubelet[3046]: E0430 03:47:34.307184 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-855c9c9d54-2tcgf_calico-apiserver(e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-855c9c9d54-2tcgf_calico-apiserver(e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855c9c9d54-2tcgf" podUID="e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86" Apr 30 03:47:34.307545 containerd[1635]: time="2025-04-30T03:47:34.307504174Z" level=error msg="encountered an error cleaning up failed sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.307586 containerd[1635]: time="2025-04-30T03:47:34.307542678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548667d5d7-rpfg6,Uid:c2c1ef43-1732-4ee2-984a-7c96357acb4c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.308205 kubelet[3046]: E0430 03:47:34.308156 3046 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.308205 kubelet[3046]: E0430 03:47:34.308187 3046 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-548667d5d7-rpfg6" Apr 30 03:47:34.308205 kubelet[3046]: E0430 03:47:34.308200 3046 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-548667d5d7-rpfg6" Apr 30 03:47:34.308698 kubelet[3046]: E0430 03:47:34.308223 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-548667d5d7-rpfg6_calico-system(c2c1ef43-1732-4ee2-984a-7c96357acb4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-548667d5d7-rpfg6_calico-system(c2c1ef43-1732-4ee2-984a-7c96357acb4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-548667d5d7-rpfg6" podUID="c2c1ef43-1732-4ee2-984a-7c96357acb4c" Apr 30 03:47:34.908815 containerd[1635]: time="2025-04-30T03:47:34.908492441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q77vc,Uid:77fda787-d7b6-4fd3-822b-fc38fd6f240c,Namespace:calico-system,Attempt:0,}" Apr 30 03:47:34.987420 containerd[1635]: time="2025-04-30T03:47:34.987357645Z" level=error msg="Failed to destroy network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.987961 containerd[1635]: time="2025-04-30T03:47:34.987871983Z" level=error msg="encountered an error cleaning up failed sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.988085 containerd[1635]: time="2025-04-30T03:47:34.987994034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q77vc,Uid:77fda787-d7b6-4fd3-822b-fc38fd6f240c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Apr 30 03:47:34.988316 kubelet[3046]: E0430 03:47:34.988259 3046 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:34.988376 kubelet[3046]: E0430 03:47:34.988343 3046 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q77vc" Apr 30 03:47:34.988414 kubelet[3046]: E0430 03:47:34.988377 3046 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q77vc" Apr 30 03:47:34.988497 kubelet[3046]: E0430 03:47:34.988453 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q77vc_calico-system(77fda787-d7b6-4fd3-822b-fc38fd6f240c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q77vc_calico-system(77fda787-d7b6-4fd3-822b-fc38fd6f240c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q77vc" podUID="77fda787-d7b6-4fd3-822b-fc38fd6f240c" Apr 30 03:47:35.123460 kubelet[3046]: I0430 03:47:35.123391 3046 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:47:35.124941 kubelet[3046]: I0430 03:47:35.124623 3046 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:47:35.130732 containerd[1635]: time="2025-04-30T03:47:35.129834270Z" level=info msg="StopPodSandbox for \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\"" Apr 30 03:47:35.130918 containerd[1635]: time="2025-04-30T03:47:35.130878313Z" level=info msg="StopPodSandbox for \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\"" Apr 30 03:47:35.131919 containerd[1635]: time="2025-04-30T03:47:35.131888302Z" level=info msg="Ensure that sandbox 94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f in task-service has been cleanup successfully" Apr 30 03:47:35.132183 containerd[1635]: time="2025-04-30T03:47:35.131909301Z" level=info msg="Ensure that sandbox 5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f in task-service has been cleanup successfully" Apr 30 03:47:35.140936 kubelet[3046]: 
I0430 03:47:35.140883 3046 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:47:35.141994 containerd[1635]: time="2025-04-30T03:47:35.141914283Z" level=info msg="StopPodSandbox for \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\"" Apr 30 03:47:35.142225 containerd[1635]: time="2025-04-30T03:47:35.142150522Z" level=info msg="Ensure that sandbox 1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611 in task-service has been cleanup successfully" Apr 30 03:47:35.144028 kubelet[3046]: I0430 03:47:35.143937 3046 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:47:35.145465 containerd[1635]: time="2025-04-30T03:47:35.145362882Z" level=info msg="StopPodSandbox for \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\"" Apr 30 03:47:35.149277 containerd[1635]: time="2025-04-30T03:47:35.149259794Z" level=info msg="Ensure that sandbox a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca in task-service has been cleanup successfully" Apr 30 03:47:35.151238 kubelet[3046]: I0430 03:47:35.151119 3046 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:47:35.153166 containerd[1635]: time="2025-04-30T03:47:35.153111409Z" level=info msg="StopPodSandbox for \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\"" Apr 30 03:47:35.153265 containerd[1635]: time="2025-04-30T03:47:35.153243540Z" level=info msg="Ensure that sandbox 0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847 in task-service has been cleanup successfully" Apr 30 03:47:35.158476 kubelet[3046]: I0430 03:47:35.158460 3046 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:47:35.161663 containerd[1635]: time="2025-04-30T03:47:35.161284050Z" level=info msg="StopPodSandbox for \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\"" Apr 30 03:47:35.161663 containerd[1635]: time="2025-04-30T03:47:35.161406092Z" level=info msg="Ensure that sandbox c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63 in task-service has been cleanup successfully" Apr 30 03:47:35.163337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca-shm.mount: Deactivated successfully. Apr 30 03:47:35.163446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f-shm.mount: Deactivated successfully. 
Apr 30 03:47:35.167338 kubelet[3046]: I0430 03:47:35.167312 3046 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:47:35.168769 containerd[1635]: time="2025-04-30T03:47:35.168648466Z" level=info msg="StopPodSandbox for \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\"" Apr 30 03:47:35.168919 containerd[1635]: time="2025-04-30T03:47:35.168877011Z" level=info msg="Ensure that sandbox 916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2 in task-service has been cleanup successfully" Apr 30 03:47:35.223741 containerd[1635]: time="2025-04-30T03:47:35.223353564Z" level=error msg="StopPodSandbox for \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\" failed" error="failed to destroy network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.223838 kubelet[3046]: E0430 03:47:35.223558 3046 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:47:35.223838 kubelet[3046]: E0430 03:47:35.223618 3046 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f"} Apr 30 03:47:35.223838 kubelet[3046]: E0430 03:47:35.223672 3046 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.223838 kubelet[3046]: E0430 03:47:35.223693 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855c9c9d54-2tcgf" podUID="e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86" Apr 30 03:47:35.225334 containerd[1635]: time="2025-04-30T03:47:35.225303427Z" level=error msg="StopPodSandbox for \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\" failed" error="failed to destroy network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Apr 30 03:47:35.225536 kubelet[3046]: E0430 03:47:35.225493 3046 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:47:35.225536 kubelet[3046]: E0430 03:47:35.225515 3046 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f"} Apr 30 03:47:35.225769 kubelet[3046]: E0430 03:47:35.225647 3046 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.225769 kubelet[3046]: E0430 03:47:35.225668 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d6dd767f5-l6gfz" podUID="517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec" Apr 30 03:47:35.228166 containerd[1635]: time="2025-04-30T03:47:35.228132720Z" level=error msg="StopPodSandbox for \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\" failed" error="failed to destroy network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.228477 containerd[1635]: time="2025-04-30T03:47:35.228250384Z" level=error msg="StopPodSandbox for \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\" failed" error="failed to destroy network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.231174 kubelet[3046]: E0430 03:47:35.231012 3046 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:47:35.231174 
kubelet[3046]: E0430 03:47:35.231036 3046 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca"} Apr 30 03:47:35.231174 kubelet[3046]: E0430 03:47:35.231056 3046 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c2c1ef43-1732-4ee2-984a-7c96357acb4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.231174 kubelet[3046]: E0430 03:47:35.231072 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2c1ef43-1732-4ee2-984a-7c96357acb4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-548667d5d7-rpfg6" podUID="c2c1ef43-1732-4ee2-984a-7c96357acb4c" Apr 30 03:47:35.231347 kubelet[3046]: E0430 03:47:35.230970 3046 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:47:35.231347 kubelet[3046]: E0430 03:47:35.231110 3046 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611"} Apr 30 03:47:35.231347 kubelet[3046]: E0430 03:47:35.231127 3046 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7758e0dd-8e52-4b84-ad62-173247f00bf9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.231347 kubelet[3046]: E0430 03:47:35.231148 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7758e0dd-8e52-4b84-ad62-173247f00bf9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fn9tg" podUID="7758e0dd-8e52-4b84-ad62-173247f00bf9" Apr 30 03:47:35.234696 containerd[1635]: time="2025-04-30T03:47:35.234647023Z" level=error msg="StopPodSandbox for 
\"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\" failed" error="failed to destroy network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.234989 kubelet[3046]: E0430 03:47:35.234883 3046 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:47:35.234989 kubelet[3046]: E0430 03:47:35.234935 3046 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847"} Apr 30 03:47:35.234989 kubelet[3046]: E0430 03:47:35.234957 3046 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94898af1-d8d4-4a31-ac96-01740beca0cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.234989 kubelet[3046]: E0430 03:47:35.234973 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94898af1-d8d4-4a31-ac96-01740beca0cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855c9c9d54-6r7fb" podUID="94898af1-d8d4-4a31-ac96-01740beca0cc" Apr 30 03:47:35.236289 containerd[1635]: time="2025-04-30T03:47:35.236268312Z" level=error msg="StopPodSandbox for \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\" failed" error="failed to destroy network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.236482 kubelet[3046]: E0430 03:47:35.236463 3046 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:47:35.236560 kubelet[3046]: E0430 03:47:35.236485 3046 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63"} Apr 30 03:47:35.236560 kubelet[3046]: E0430 03:47:35.236502 3046 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77fda787-d7b6-4fd3-822b-fc38fd6f240c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.236560 kubelet[3046]: E0430 03:47:35.236517 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77fda787-d7b6-4fd3-822b-fc38fd6f240c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q77vc" podUID="77fda787-d7b6-4fd3-822b-fc38fd6f240c" Apr 30 03:47:35.237503 containerd[1635]: time="2025-04-30T03:47:35.237474943Z" level=error msg="StopPodSandbox for \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\" failed" error="failed to destroy network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:47:35.237597 kubelet[3046]: E0430 03:47:35.237581 3046 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:47:35.237667 kubelet[3046]: E0430 03:47:35.237602 3046 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2"} Apr 30 03:47:35.237667 kubelet[3046]: E0430 03:47:35.237631 3046 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ac99651-c6e8-4435-af79-6d59e81f514c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:47:35.237667 kubelet[3046]: E0430 03:47:35.237647 3046 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ac99651-c6e8-4435-af79-6d59e81f514c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zsrkz" podUID="8ac99651-c6e8-4435-af79-6d59e81f514c" Apr 30 03:47:41.089383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790368315.mount: Deactivated successfully. Apr 30 03:47:41.133192 containerd[1635]: time="2025-04-30T03:47:41.128273489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:41.133192 containerd[1635]: time="2025-04-30T03:47:41.133151593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:47:41.140674 containerd[1635]: time="2025-04-30T03:47:41.140624314Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:41.141228 containerd[1635]: time="2025-04-30T03:47:41.141203544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:41.145173 containerd[1635]: time="2025-04-30T03:47:41.145140771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.009044142s" Apr 30 03:47:41.145173 containerd[1635]: time="2025-04-30T03:47:41.145167842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:47:41.221106 containerd[1635]: time="2025-04-30T03:47:41.221066679Z" level=info msg="CreateContainer within sandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:47:41.265490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180399289.mount: Deactivated successfully. Apr 30 03:47:41.283631 containerd[1635]: time="2025-04-30T03:47:41.283594944Z" level=info msg="CreateContainer within sandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\"" Apr 30 03:47:41.290522 containerd[1635]: time="2025-04-30T03:47:41.290474809Z" level=info msg="StartContainer for \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\"" Apr 30 03:47:41.369166 containerd[1635]: time="2025-04-30T03:47:41.368696527Z" level=info msg="StartContainer for \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\" returns successfully" Apr 30 03:47:41.440749 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:47:41.443443 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
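The burst of KillPodSandbox failures above shares a single root cause: the Calico CNI plugin reads /var/lib/calico/nodename, and that file only exists once the calico/node container has started, which in turn waits on the image pull that completes at 03:47:41. Below is a minimal Go sketch of that gating, a simplified stand-in rather than Calico's actual plugin code (the file path and error wording come from the log; the function itself is hypothetical):

```go
package main

import (
	"fmt"
	"os"
)

// nodename approximates the check behind the errors above: teardown
// cannot proceed until calico/node has written its nodename file.
func nodename() (string, error) {
	const path = "/var/lib/calico/nodename" // written by calico/node on startup
	b, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", path, err)
	}
	return string(b), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		// Before 03:47:41 every CNI DEL fails with this class of error.
		fmt.Println("CNI DEL would fail:", err)
		return
	}
	// Once the file exists, the same DEL calls succeed on retry.
	fmt.Println("node:", name)
}
```

Once calico-node writes the file, the StopPodSandbox calls that failed at 03:47:35 succeed when kubelet retries them, as seen from 03:47:46 onward.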
Apr 30 03:47:42.278368 kubelet[3046]: I0430 03:47:42.269440 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l8lvq" podStartSLOduration=2.319429355 podStartE2EDuration="20.258189132s" podCreationTimestamp="2025-04-30 03:47:22 +0000 UTC" firstStartedPulling="2025-04-30 03:47:23.210785628 +0000 UTC m=+21.382317509" lastFinishedPulling="2025-04-30 03:47:41.149545405 +0000 UTC m=+39.321077286" observedRunningTime="2025-04-30 03:47:42.226460649 +0000 UTC m=+40.397992550" watchObservedRunningTime="2025-04-30 03:47:42.258189132 +0000 UTC m=+40.429721013" Apr 30 03:47:42.659051 systemd-resolved[1522]: Under memory pressure, flushing caches. Apr 30 03:47:42.663585 systemd-journald[1184]: Under memory pressure, flushing caches. Apr 30 03:47:42.660502 systemd-resolved[1522]: Flushed all caches. Apr 30 03:47:43.231680 systemd[1]: run-containerd-runc-k8s.io-c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39-runc.fgKs3U.mount: Deactivated successfully. Apr 30 03:47:43.958488 kubelet[3046]: I0430 03:47:43.958451 3046 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:44.229568 systemd[1]: run-containerd-runc-k8s.io-c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39-runc.YPXMhT.mount: Deactivated successfully. Apr 30 03:47:44.706120 systemd-resolved[1522]: Under memory pressure, flushing caches. Apr 30 03:47:44.709286 systemd-journald[1184]: Under memory pressure, flushing caches. Apr 30 03:47:44.706136 systemd-resolved[1522]: Flushed all caches. Apr 30 03:47:45.123032 kernel: bpftool[4459]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:47:45.309461 systemd-networkd[1258]: vxlan.calico: Link UP Apr 30 03:47:45.309468 systemd-networkd[1258]: vxlan.calico: Gained carrier Apr 30 03:47:46.370217 systemd-networkd[1258]: vxlan.calico: Gained IPv6LL Apr 30 03:47:46.907846 containerd[1635]: time="2025-04-30T03:47:46.907303566Z" level=info msg="StopPodSandbox for \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\"" Apr 30 03:47:46.909982 containerd[1635]: time="2025-04-30T03:47:46.908656406Z" level=info msg="StopPodSandbox for \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\"" Apr 30 03:47:46.913678 containerd[1635]: time="2025-04-30T03:47:46.913235711Z" level=info msg="StopPodSandbox for \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\"" Apr 30 03:47:46.915488 containerd[1635]: time="2025-04-30T03:47:46.915418886Z" level=info msg="StopPodSandbox for \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\"" Apr 30 03:47:46.920613 containerd[1635]: time="2025-04-30T03:47:46.919385510Z" level=info msg="StopPodSandbox for \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\"" Apr 30 03:47:46.922620 containerd[1635]: time="2025-04-30T03:47:46.922544118Z" level=info msg="StopPodSandbox for \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\"" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.076 [INFO][4611] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.076 [INFO][4611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" iface="eth0" netns="/var/run/netns/cni-3edf16b4-b3b2-7fb0-03be-c2e2adb9526e" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.077 [INFO][4611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" iface="eth0" netns="/var/run/netns/cni-3edf16b4-b3b2-7fb0-03be-c2e2adb9526e" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.077 [INFO][4611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" iface="eth0" netns="/var/run/netns/cni-3edf16b4-b3b2-7fb0-03be-c2e2adb9526e" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.077 [INFO][4611] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.077 [INFO][4611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.228 [INFO][4666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" HandleID="k8s-pod-network.1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.229 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.230 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.238 [WARNING][4666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" HandleID="k8s-pod-network.1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.238 [INFO][4666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" HandleID="k8s-pod-network.1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.240 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:47.243938 containerd[1635]: 2025-04-30 03:47:47.241 [INFO][4611] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:47:47.247254 systemd[1]: run-netns-cni\x2d3edf16b4\x2db3b2\x2d7fb0\x2d03be\x2dc2e2adb9526e.mount: Deactivated successfully. 
Apr 30 03:47:47.250414 containerd[1635]: time="2025-04-30T03:47:47.250231094Z" level=info msg="TearDown network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\" successfully" Apr 30 03:47:47.250678 containerd[1635]: time="2025-04-30T03:47:47.250404243Z" level=info msg="StopPodSandbox for \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\" returns successfully" Apr 30 03:47:47.251647 containerd[1635]: time="2025-04-30T03:47:47.251630111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fn9tg,Uid:7758e0dd-8e52-4b84-ad62-173247f00bf9,Namespace:kube-system,Attempt:1,}" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.068 [INFO][4620] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.069 [INFO][4620] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" iface="eth0" netns="/var/run/netns/cni-958b3b40-f64c-1db4-6a45-9091a2651c62" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.070 [INFO][4620] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" iface="eth0" netns="/var/run/netns/cni-958b3b40-f64c-1db4-6a45-9091a2651c62" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.074 [INFO][4620] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" iface="eth0" netns="/var/run/netns/cni-958b3b40-f64c-1db4-6a45-9091a2651c62" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.075 [INFO][4620] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.076 [INFO][4620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.228 [INFO][4664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.230 [INFO][4664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.240 [INFO][4664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.250 [WARNING][4664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.251 [INFO][4664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.252 [INFO][4664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:47.259246 containerd[1635]: 2025-04-30 03:47:47.257 [INFO][4620] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:47:47.262013 systemd[1]: run-netns-cni\x2d958b3b40\x2df64c\x2d1db4\x2d6a45\x2d9091a2651c62.mount: Deactivated successfully. Apr 30 03:47:47.263042 containerd[1635]: time="2025-04-30T03:47:47.262944296Z" level=info msg="TearDown network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\" successfully" Apr 30 03:47:47.263042 containerd[1635]: time="2025-04-30T03:47:47.262961669Z" level=info msg="StopPodSandbox for \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\" returns successfully" Apr 30 03:47:47.264356 containerd[1635]: time="2025-04-30T03:47:47.264209227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855c9c9d54-6r7fb,Uid:94898af1-d8d4-4a31-ac96-01740beca0cc,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.073 [INFO][4623] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.074 [INFO][4623] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" iface="eth0" netns="/var/run/netns/cni-6dda3087-07a4-8c3d-e377-e962f4f6ed65" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.074 [INFO][4623] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" iface="eth0" netns="/var/run/netns/cni-6dda3087-07a4-8c3d-e377-e962f4f6ed65" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.075 [INFO][4623] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" iface="eth0" netns="/var/run/netns/cni-6dda3087-07a4-8c3d-e377-e962f4f6ed65" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.075 [INFO][4623] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.075 [INFO][4623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.227 [INFO][4662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" HandleID="k8s-pod-network.c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.230 [INFO][4662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.252 [INFO][4662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.258 [WARNING][4662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" HandleID="k8s-pod-network.c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.258 [INFO][4662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" HandleID="k8s-pod-network.c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.261 [INFO][4662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:47.266346 containerd[1635]: 2025-04-30 03:47:47.265 [INFO][4623] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:47:47.268249 containerd[1635]: time="2025-04-30T03:47:47.268190808Z" level=info msg="TearDown network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\" successfully" Apr 30 03:47:47.268249 containerd[1635]: time="2025-04-30T03:47:47.268211868Z" level=info msg="StopPodSandbox for \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\" returns successfully" Apr 30 03:47:47.268546 systemd[1]: run-netns-cni\x2d6dda3087\x2d07a4\x2d8c3d\x2de377\x2de962f4f6ed65.mount: Deactivated successfully. Apr 30 03:47:47.269351 containerd[1635]: time="2025-04-30T03:47:47.269329722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q77vc,Uid:77fda787-d7b6-4fd3-822b-fc38fd6f240c,Namespace:calico-system,Attempt:1,}" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.066 [INFO][4605] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.066 [INFO][4605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" iface="eth0" netns="/var/run/netns/cni-751fcc78-ee1c-045c-3b2a-4d0bdc6a6865" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.068 [INFO][4605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" iface="eth0" netns="/var/run/netns/cni-751fcc78-ee1c-045c-3b2a-4d0bdc6a6865" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.072 [INFO][4605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" iface="eth0" netns="/var/run/netns/cni-751fcc78-ee1c-045c-3b2a-4d0bdc6a6865" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.072 [INFO][4605] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.072 [INFO][4605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.228 [INFO][4657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" HandleID="k8s-pod-network.916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.230 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.261 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.270 [WARNING][4657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" HandleID="k8s-pod-network.916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.270 [INFO][4657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" HandleID="k8s-pod-network.916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.280 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:47.291860 containerd[1635]: 2025-04-30 03:47:47.286 [INFO][4605] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:47:47.291860 containerd[1635]: time="2025-04-30T03:47:47.291662750Z" level=info msg="TearDown network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\" successfully" Apr 30 03:47:47.291860 containerd[1635]: time="2025-04-30T03:47:47.291679982Z" level=info msg="StopPodSandbox for \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\" returns successfully" Apr 30 03:47:47.294237 containerd[1635]: time="2025-04-30T03:47:47.293240216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsrkz,Uid:8ac99651-c6e8-4435-af79-6d59e81f514c,Namespace:kube-system,Attempt:1,}" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.070 [INFO][4632] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.071 [INFO][4632] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" iface="eth0" netns="/var/run/netns/cni-7816e37d-2be9-e9fd-0473-30d73d9183f5" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.072 [INFO][4632] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" iface="eth0" netns="/var/run/netns/cni-7816e37d-2be9-e9fd-0473-30d73d9183f5" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.073 [INFO][4632] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" iface="eth0" netns="/var/run/netns/cni-7816e37d-2be9-e9fd-0473-30d73d9183f5" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.073 [INFO][4632] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.073 [INFO][4632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.228 [INFO][4659] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" HandleID="k8s-pod-network.94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.230 [INFO][4659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.279 [INFO][4659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.284 [WARNING][4659] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" HandleID="k8s-pod-network.94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.284 [INFO][4659] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" HandleID="k8s-pod-network.94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.289 [INFO][4659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:47.298653 containerd[1635]: 2025-04-30 03:47:47.294 [INFO][4632] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:47:47.298653 containerd[1635]: time="2025-04-30T03:47:47.298373323Z" level=info msg="TearDown network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\" successfully" Apr 30 03:47:47.298653 containerd[1635]: time="2025-04-30T03:47:47.298388070Z" level=info msg="StopPodSandbox for \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\" returns successfully" Apr 30 03:47:47.301927 containerd[1635]: time="2025-04-30T03:47:47.301801763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6dd767f5-l6gfz,Uid:517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.073 [INFO][4610] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.073 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" iface="eth0" netns="/var/run/netns/cni-3077b758-2679-deb0-6965-66cec35917e3" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.074 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" iface="eth0" netns="/var/run/netns/cni-3077b758-2679-deb0-6965-66cec35917e3" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.074 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" iface="eth0" netns="/var/run/netns/cni-3077b758-2679-deb0-6965-66cec35917e3" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.074 [INFO][4610] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.074 [INFO][4610] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.228 [INFO][4660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" HandleID="k8s-pod-network.a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.229 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.286 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.293 [WARNING][4660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" HandleID="k8s-pod-network.a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.293 [INFO][4660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" HandleID="k8s-pod-network.a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.295 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:47.306034 containerd[1635]: 2025-04-30 03:47:47.297 [INFO][4610] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:47:47.306649 containerd[1635]: time="2025-04-30T03:47:47.306308060Z" level=info msg="TearDown network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\" successfully" Apr 30 03:47:47.306649 containerd[1635]: time="2025-04-30T03:47:47.306324181Z" level=info msg="StopPodSandbox for \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\" returns successfully" Apr 30 03:47:47.306773 containerd[1635]: time="2025-04-30T03:47:47.306757163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548667d5d7-rpfg6,Uid:c2c1ef43-1732-4ee2-984a-7c96357acb4c,Namespace:calico-system,Attempt:1,}" Apr 30 03:47:47.531349 systemd-networkd[1258]: cali59a626749dc: Link UP Apr 30 03:47:47.531577 systemd-networkd[1258]: cali59a626749dc: Gained carrier Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.388 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0 calico-apiserver-855c9c9d54- calico-apiserver 94898af1-d8d4-4a31-ac96-01740beca0cc 772 0 2025-04-30 03:47:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855c9c9d54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-c-b54c1f5c93 calico-apiserver-855c9c9d54-6r7fb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59a626749dc [] []}} ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-6r7fb" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.388 [INFO][4716] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-6r7fb" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.460 [INFO][4787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.474 [INFO][4787] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-c-b54c1f5c93", "pod":"calico-apiserver-855c9c9d54-6r7fb", "timestamp":"2025-04-30 03:47:47.460114157 +0000 UTC"}, Hostname:"ci-4081-3-3-c-b54c1f5c93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.475 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.475 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.475 [INFO][4787] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-b54c1f5c93' Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.481 [INFO][4787] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.492 [INFO][4787] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.499 [INFO][4787] ipam/ipam.go 489: Trying affinity for 192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.501 [INFO][4787] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.503 [INFO][4787] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.503 [INFO][4787] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.507 [INFO][4787] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070 Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.514 [INFO][4787] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.522 [INFO][4787] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.65/26] block=192.168.106.64/26 handle="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.522 [INFO][4787] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.65/26] handle="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.522 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
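The allocation sequence above is: acquire the host-wide IPAM lock, confirm this host's affinity for block 192.168.106.64/26, load the block, and claim the next free address (.65 here, then .66 for the coredns pod that follows). A toy Go sketch of that flow, assuming an in-memory block rather than Calico's real datastore:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy stand-in for a Calico IPAM block with host affinity.
type block struct {
	mu    sync.Mutex            // stands in for the host-wide IPAM lock
	cidr  netip.Prefix          // 192.168.106.64/26, as claimed in the log
	inUse map[netip.Addr]string // addr -> handle ("k8s-pod-network.<id>")
}

func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	// Skip the network address itself; the log shows .65 and .66 handed out.
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.inUse[a]; !taken {
			b.inUse[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr:  netip.MustParsePrefix("192.168.106.64/26"),
		inUse: map[netip.Addr]string{},
	}
	a1, _ := b.assign("k8s-pod-network.d9a43326") // calico-apiserver pod
	a2, _ := b.assign("k8s-pod-network.2e1c9525") // coredns pod
	fmt.Println(a1, a2)                           // 192.168.106.65 192.168.106.66
}
```

Host-affine blocks let Calico route one /26 per node instead of one route per pod, which is why the ADD path spends its time on the affinity lookups visible above.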
Apr 30 03:47:47.557922 containerd[1635]: 2025-04-30 03:47:47.522 [INFO][4787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.65/26] IPv6=[] ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.558384 containerd[1635]: 2025-04-30 03:47:47.527 [INFO][4716] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-6r7fb" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0", GenerateName:"calico-apiserver-855c9c9d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"94898af1-d8d4-4a31-ac96-01740beca0cc", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855c9c9d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"", Pod:"calico-apiserver-855c9c9d54-6r7fb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59a626749dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.558384 containerd[1635]: 2025-04-30 03:47:47.528 [INFO][4716] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.65/32] ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-6r7fb" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.558384 containerd[1635]: 2025-04-30 03:47:47.528 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59a626749dc ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-6r7fb" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.558384 containerd[1635]: 2025-04-30 03:47:47.531 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-6r7fb" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.558384 containerd[1635]: 2025-04-30 03:47:47.532 [INFO][4716] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-6r7fb" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0", GenerateName:"calico-apiserver-855c9c9d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"94898af1-d8d4-4a31-ac96-01740beca0cc", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855c9c9d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070", Pod:"calico-apiserver-855c9c9d54-6r7fb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59a626749dc", MAC:"02:52:cd:00:c1:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.558384 containerd[1635]: 2025-04-30 03:47:47.547 [INFO][4716] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-6r7fb" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:47:47.577250 systemd-networkd[1258]: cali77cc5461302: Link UP Apr 30 03:47:47.578406 systemd-networkd[1258]: cali77cc5461302: Gained carrier Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.370 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0 coredns-7db6d8ff4d- kube-system 7758e0dd-8e52-4b84-ad62-173247f00bf9 770 0 2025-04-30 03:47:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-c-b54c1f5c93 coredns-7db6d8ff4d-fn9tg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali77cc5461302 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fn9tg" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.371 [INFO][4704] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fn9tg" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.463 [INFO][4769] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" HandleID="k8s-pod-network.2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.476 [INFO][4769] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" HandleID="k8s-pod-network.2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000304450), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-c-b54c1f5c93", "pod":"coredns-7db6d8ff4d-fn9tg", "timestamp":"2025-04-30 03:47:47.463425265 +0000 UTC"}, Hostname:"ci-4081-3-3-c-b54c1f5c93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.476 [INFO][4769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.522 [INFO][4769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.523 [INFO][4769] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-b54c1f5c93' Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.525 [INFO][4769] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.530 [INFO][4769] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.534 [INFO][4769] ipam/ipam.go 489: Trying affinity for 192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.538 [INFO][4769] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.548 [INFO][4769] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.548 [INFO][4769] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.551 [INFO][4769] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554 Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.562 [INFO][4769] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.64/26 
handle="k8s-pod-network.2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.572 [INFO][4769] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.66/26] block=192.168.106.64/26 handle="k8s-pod-network.2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.572 [INFO][4769] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.66/26] handle="k8s-pod-network.2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.573 [INFO][4769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:47.595230 containerd[1635]: 2025-04-30 03:47:47.573 [INFO][4769] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.66/26] IPv6=[] ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" HandleID="k8s-pod-network.2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.596654 containerd[1635]: 2025-04-30 03:47:47.574 [INFO][4704] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fn9tg" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7758e0dd-8e52-4b84-ad62-173247f00bf9", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"", Pod:"coredns-7db6d8ff4d-fn9tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77cc5461302", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.596654 containerd[1635]: 2025-04-30 03:47:47.575 [INFO][4704] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.66/32] ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-fn9tg" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.596654 containerd[1635]: 2025-04-30 03:47:47.575 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77cc5461302 ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fn9tg" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.596654 containerd[1635]: 2025-04-30 03:47:47.576 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fn9tg" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.596654 containerd[1635]: 2025-04-30 03:47:47.576 [INFO][4704] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fn9tg" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7758e0dd-8e52-4b84-ad62-173247f00bf9", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554", Pod:"coredns-7db6d8ff4d-fn9tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77cc5461302", MAC:"62:6d:e7:a2:60:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.596654 containerd[1635]: 2025-04-30 03:47:47.592 [INFO][4704] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fn9tg" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:47:47.627081 systemd-networkd[1258]: calib9c823a46ef: Link UP Apr 30 
03:47:47.628579 systemd-networkd[1258]: calib9c823a46ef: Gained carrier Apr 30 03:47:47.633515 containerd[1635]: time="2025-04-30T03:47:47.633143718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:47.633865 containerd[1635]: time="2025-04-30T03:47:47.633618140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:47.633941 containerd[1635]: time="2025-04-30T03:47:47.633853537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.634794 containerd[1635]: time="2025-04-30T03:47:47.634281028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.655171 containerd[1635]: time="2025-04-30T03:47:47.653460238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:47.655171 containerd[1635]: time="2025-04-30T03:47:47.653504432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:47.655171 containerd[1635]: time="2025-04-30T03:47:47.653517187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.656140 containerd[1635]: time="2025-04-30T03:47:47.656006013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.406 [INFO][4756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0 calico-kube-controllers-548667d5d7- calico-system c2c1ef43-1732-4ee2-984a-7c96357acb4c 773 0 2025-04-30 03:47:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:548667d5d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-c-b54c1f5c93 calico-kube-controllers-548667d5d7-rpfg6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib9c823a46ef [] []}} ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Namespace="calico-system" Pod="calico-kube-controllers-548667d5d7-rpfg6" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.407 [INFO][4756] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Namespace="calico-system" Pod="calico-kube-controllers-548667d5d7-rpfg6" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.468 [INFO][4782] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" 
Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.484 [INFO][4782] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-c-b54c1f5c93", "pod":"calico-kube-controllers-548667d5d7-rpfg6", "timestamp":"2025-04-30 03:47:47.468491024 +0000 UTC"}, Hostname:"ci-4081-3-3-c-b54c1f5c93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.484 [INFO][4782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.573 [INFO][4782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.573 [INFO][4782] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-b54c1f5c93' Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.577 [INFO][4782] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.584 [INFO][4782] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.596 [INFO][4782] ipam/ipam.go 489: Trying affinity for 192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.598 [INFO][4782] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.604 [INFO][4782] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.604 [INFO][4782] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.606 [INFO][4782] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8 Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.612 [INFO][4782] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.617 [INFO][4782] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.67/26] block=192.168.106.64/26 handle="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.617 [INFO][4782] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.106.67/26] handle="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.617 [INFO][4782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:47.658337 containerd[1635]: 2025-04-30 03:47:47.617 [INFO][4782] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.67/26] IPv6=[] ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.658744 containerd[1635]: 2025-04-30 03:47:47.620 [INFO][4756] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Namespace="calico-system" Pod="calico-kube-controllers-548667d5d7-rpfg6" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0", GenerateName:"calico-kube-controllers-548667d5d7-", Namespace:"calico-system", SelfLink:"", UID:"c2c1ef43-1732-4ee2-984a-7c96357acb4c", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548667d5d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"", Pod:"calico-kube-controllers-548667d5d7-rpfg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9c823a46ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.658744 containerd[1635]: 2025-04-30 03:47:47.622 [INFO][4756] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.67/32] ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Namespace="calico-system" Pod="calico-kube-controllers-548667d5d7-rpfg6" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.658744 containerd[1635]: 2025-04-30 03:47:47.622 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9c823a46ef ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Namespace="calico-system" Pod="calico-kube-controllers-548667d5d7-rpfg6" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.658744 containerd[1635]: 2025-04-30 03:47:47.629 [INFO][4756] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Namespace="calico-system" Pod="calico-kube-controllers-548667d5d7-rpfg6" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.658744 containerd[1635]: 2025-04-30 03:47:47.631 [INFO][4756] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Namespace="calico-system" Pod="calico-kube-controllers-548667d5d7-rpfg6" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0", GenerateName:"calico-kube-controllers-548667d5d7-", Namespace:"calico-system", SelfLink:"", UID:"c2c1ef43-1732-4ee2-984a-7c96357acb4c", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548667d5d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8", Pod:"calico-kube-controllers-548667d5d7-rpfg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9c823a46ef", MAC:"1e:b1:1b:70:e0:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.658744 containerd[1635]: 2025-04-30 03:47:47.649 [INFO][4756] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Namespace="calico-system" Pod="calico-kube-controllers-548667d5d7-rpfg6" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:47:47.692628 systemd-networkd[1258]: calia8c29090d42: Link UP Apr 30 03:47:47.694106 systemd-networkd[1258]: calia8c29090d42: Gained carrier Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.443 [INFO][4730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0 coredns-7db6d8ff4d- kube-system 8ac99651-c6e8-4435-af79-6d59e81f514c 771 0 2025-04-30 03:47:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-c-b54c1f5c93 coredns-7db6d8ff4d-zsrkz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia8c29090d42 [{dns UDP 
53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsrkz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.443 [INFO][4730] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsrkz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.495 [INFO][4798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" HandleID="k8s-pod-network.e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.505 [INFO][4798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" HandleID="k8s-pod-network.e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-c-b54c1f5c93", "pod":"coredns-7db6d8ff4d-zsrkz", "timestamp":"2025-04-30 03:47:47.495976538 +0000 UTC"}, Hostname:"ci-4081-3-3-c-b54c1f5c93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.505 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.617 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.617 [INFO][4798] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-b54c1f5c93' Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.620 [INFO][4798] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.629 [INFO][4798] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.650 [INFO][4798] ipam/ipam.go 489: Trying affinity for 192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.652 [INFO][4798] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.656 [INFO][4798] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.656 [INFO][4798] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.659 [INFO][4798] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.668 [INFO][4798] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.680 [INFO][4798] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.68/26] block=192.168.106.64/26 handle="k8s-pod-network.e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.680 [INFO][4798] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.68/26] handle="k8s-pod-network.e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.680 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
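The three ipam/ipam.go sequences logged so far (handles 2e1c9525…, b82f58fb…, and e1b45045…, yielding .66, .67, and .68 from block 192.168.106.64/26) all follow the same path: acquire the host-wide IPAM lock, confirm this host's affinity for the block, claim the next free ordinal, write the block back, then release the lock. Below is a minimal self-contained sketch of that sequence. It is a simplified model using a local mutex and map, not Calico's actual libcalico-go implementation, and the handle strings are illustrative.

    // ipam_sketch.go: simplified model of the assignment sequence logged above.
    package main

    import (
    	"fmt"
    	"net"
    	"sync"
    )

    // hostIPAMLock stands in for the "host-wide IPAM lock" in the records above.
    var hostIPAMLock sync.Mutex

    // block models one affine IPAM block such as 192.168.106.64/26.
    type block struct {
    	cidr *net.IPNet
    	used map[int]string // ordinal -> handle, e.g. "k8s-pod-network.<containerID>"
    }

    // autoAssign mirrors the logged sequence: lock, scan the block, claim, unlock.
    func (b *block) autoAssign(handle string) (net.IP, error) {
    	hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
    	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

    	ones, bits := b.cidr.Mask.Size() // a /26 over 32 bits gives 64 ordinals
    	for ord := 1; ord < 1<<(bits-ones); ord++ {
    		if _, taken := b.used[ord]; taken {
    			continue
    		}
    		b.used[ord] = handle // "Writing block in order to claim IPs"
    		ip := make(net.IP, 4)
    		copy(ip, b.cidr.IP.To4())
    		ip[3] += byte(ord) // stays inside the /26 for ord < 64
    		return ip, nil
    	}
    	return nil, fmt.Errorf("block %s is full", b.cidr)
    }

    func main() {
    	_, cidr, _ := net.ParseCIDR("192.168.106.64/26")
    	b := &block{cidr: cidr, used: map[int]string{
    		1: "earlier-pod", 2: "coredns-fn9tg", 3: "calico-kube-controllers",
    	}}
    	ip, _ := b.autoAssign("k8s-pod-network.e1b45045...")
    	fmt.Println(ip) // 192.168.106.68, consistent with the grant logged above
    }

The real implementation persists the block in the datastore rather than in memory, which is why each grant above is bracketed by an explicit "Writing block in order to claim IPs" before the lock is released.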
Apr 30 03:47:47.730090 containerd[1635]: 2025-04-30 03:47:47.680 [INFO][4798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.68/26] IPv6=[] ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" HandleID="k8s-pod-network.e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.731649 containerd[1635]: 2025-04-30 03:47:47.686 [INFO][4730] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsrkz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ac99651-c6e8-4435-af79-6d59e81f514c", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"", Pod:"coredns-7db6d8ff4d-zsrkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8c29090d42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.731649 containerd[1635]: 2025-04-30 03:47:47.686 [INFO][4730] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.68/32] ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsrkz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.731649 containerd[1635]: 2025-04-30 03:47:47.686 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8c29090d42 ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsrkz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.731649 containerd[1635]: 2025-04-30 03:47:47.694 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsrkz" 
WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.731649 containerd[1635]: 2025-04-30 03:47:47.697 [INFO][4730] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsrkz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ac99651-c6e8-4435-af79-6d59e81f514c", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b", Pod:"coredns-7db6d8ff4d-zsrkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8c29090d42", MAC:"ca:d8:53:82:10:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.731649 containerd[1635]: 2025-04-30 03:47:47.720 [INFO][4730] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsrkz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:47:47.735877 containerd[1635]: time="2025-04-30T03:47:47.735439732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:47.735877 containerd[1635]: time="2025-04-30T03:47:47.735510497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:47.735877 containerd[1635]: time="2025-04-30T03:47:47.735526216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.735877 containerd[1635]: time="2025-04-30T03:47:47.735635022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.768452 systemd-networkd[1258]: cali4cf2372eaa5: Link UP Apr 30 03:47:47.772996 systemd-networkd[1258]: cali4cf2372eaa5: Gained carrier Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.363 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0 csi-node-driver- calico-system 77fda787-d7b6-4fd3-822b-fc38fd6f240c 775 0 2025-04-30 03:47:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-c-b54c1f5c93 csi-node-driver-q77vc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4cf2372eaa5 [] []}} ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Namespace="calico-system" Pod="csi-node-driver-q77vc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.367 [INFO][4708] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Namespace="calico-system" Pod="csi-node-driver-q77vc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.497 [INFO][4772] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" HandleID="k8s-pod-network.f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.513 [INFO][4772] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" HandleID="k8s-pod-network.f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000d3ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-c-b54c1f5c93", "pod":"csi-node-driver-q77vc", "timestamp":"2025-04-30 03:47:47.497820118 +0000 UTC"}, Hostname:"ci-4081-3-3-c-b54c1f5c93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.513 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.680 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.680 [INFO][4772] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-b54c1f5c93' Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.683 [INFO][4772] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.688 [INFO][4772] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.702 [INFO][4772] ipam/ipam.go 489: Trying affinity for 192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.705 [INFO][4772] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.713 [INFO][4772] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.713 [INFO][4772] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.716 [INFO][4772] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3 Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.725 [INFO][4772] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.735 [INFO][4772] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.69/26] block=192.168.106.64/26 handle="k8s-pod-network.f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.735 [INFO][4772] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.69/26] handle="k8s-pod-network.f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.736 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
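One readability note on the WorkloadEndpoint dumps above: they are Go struct dumps, so the WorkloadEndpointPort values appear in hex (Port:0x35, Port:0x23c1), and Protocol Type:1 selects the string form of the protocol (the StrVal:"UDP"/"TCP" visible in the dump itself). A trivial check confirms these are the standard CoreDNS ports, nothing exotic:

    // portcheck.go: decode the hex port values printed in the dumps above.
    package main

    import "fmt"

    func main() {
    	// Port values exactly as printed in the WorkloadEndpointPort dumps.
    	ports := map[string]uint16{"dns, dns-tcp": 0x35, "metrics": 0x23c1}
    	for name, p := range ports {
    		fmt.Printf("%-12s %#x = %d\n", name, p, p) // 0x35 = 53, 0x23c1 = 9153
    	}
    }

So the dumps describe plain DNS on 53/udp and 53/tcp plus the usual CoreDNS Prometheus metrics port 9153.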
Apr 30 03:47:47.792416 containerd[1635]: 2025-04-30 03:47:47.736 [INFO][4772] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.69/26] IPv6=[] ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" HandleID="k8s-pod-network.f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.792882 containerd[1635]: 2025-04-30 03:47:47.750 [INFO][4708] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Namespace="calico-system" Pod="csi-node-driver-q77vc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77fda787-d7b6-4fd3-822b-fc38fd6f240c", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"", Pod:"csi-node-driver-q77vc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4cf2372eaa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.792882 containerd[1635]: 2025-04-30 03:47:47.751 [INFO][4708] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.69/32] ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Namespace="calico-system" Pod="csi-node-driver-q77vc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.792882 containerd[1635]: 2025-04-30 03:47:47.751 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cf2372eaa5 ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Namespace="calico-system" Pod="csi-node-driver-q77vc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.792882 containerd[1635]: 2025-04-30 03:47:47.771 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Namespace="calico-system" Pod="csi-node-driver-q77vc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.792882 containerd[1635]: 2025-04-30 03:47:47.772 [INFO][4708] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Namespace="calico-system" Pod="csi-node-driver-q77vc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77fda787-d7b6-4fd3-822b-fc38fd6f240c", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3", Pod:"csi-node-driver-q77vc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4cf2372eaa5", MAC:"a2:8d:19:98:da:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.792882 containerd[1635]: 2025-04-30 03:47:47.787 [INFO][4708] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3" Namespace="calico-system" Pod="csi-node-driver-q77vc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:47:47.811802 containerd[1635]: time="2025-04-30T03:47:47.810522509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855c9c9d54-6r7fb,Uid:94898af1-d8d4-4a31-ac96-01740beca0cc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\"" Apr 30 03:47:47.818182 containerd[1635]: time="2025-04-30T03:47:47.818078838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:47:47.827850 containerd[1635]: time="2025-04-30T03:47:47.827388616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fn9tg,Uid:7758e0dd-8e52-4b84-ad62-173247f00bf9,Namespace:kube-system,Attempt:1,} returns sandbox id \"2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554\"" Apr 30 03:47:47.833488 systemd-networkd[1258]: cali394eefb2576: Link UP Apr 30 03:47:47.834032 systemd-networkd[1258]: cali394eefb2576: Gained carrier Apr 30 03:47:47.838545 containerd[1635]: time="2025-04-30T03:47:47.832731623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:47.838545 containerd[1635]: time="2025-04-30T03:47:47.832777229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:47.838545 containerd[1635]: time="2025-04-30T03:47:47.832791105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.838545 containerd[1635]: time="2025-04-30T03:47:47.833681927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.840798 containerd[1635]: time="2025-04-30T03:47:47.840776589Z" level=info msg="CreateContainer within sandbox \"2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.444 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0 calico-apiserver-d6dd767f5- calico-apiserver 517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec 774 0 2025-04-30 03:47:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d6dd767f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-c-b54c1f5c93 calico-apiserver-d6dd767f5-l6gfz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali394eefb2576 [] []}} ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-l6gfz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.445 [INFO][4740] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-l6gfz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.530 [INFO][4804] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" HandleID="k8s-pod-network.e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.551 [INFO][4804] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" HandleID="k8s-pod-network.e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-c-b54c1f5c93", "pod":"calico-apiserver-d6dd767f5-l6gfz", "timestamp":"2025-04-30 03:47:47.530229552 +0000 UTC"}, Hostname:"ci-4081-3-3-c-b54c1f5c93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.551 [INFO][4804] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.736 [INFO][4804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.736 [INFO][4804] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-b54c1f5c93' Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.739 [INFO][4804] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.749 [INFO][4804] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.763 [INFO][4804] ipam/ipam.go 489: Trying affinity for 192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.778 [INFO][4804] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.790 [INFO][4804] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.791 [INFO][4804] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.795 [INFO][4804] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7 Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.806 [INFO][4804] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.819 [INFO][4804] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.70/26] block=192.168.106.64/26 handle="k8s-pod-network.e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.819 [INFO][4804] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.70/26] handle="k8s-pod-network.e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.819 [INFO][4804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
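With all five assignments now logged, note how the host-wide IPAM lock serializes them: [4782] acquires at .573 the moment [4769] releases, then [4798] at .617, [4772] at .680, and [4804] at .736, each holding the lock for roughly 44 to 83 ms. A throwaway parser over a saved journal makes those hold times explicit; the only assumptions are the "[INFO][id] ipam/ipam_plugin.go N: message" shape and the three-digit fractional timestamps seen above.

    // lockwatch.go: pair lock acquire/release records and print hold times.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    var lockRe = regexp.MustCompile(`(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) \[INFO\]\[(\d+)\] ipam/ipam_plugin\.go \d+: (Acquired|Released) host-wide IPAM lock`)

    func main() {
    	acquired := map[string]time.Time{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // records in this journal run long
    	for sc.Scan() {
    		m := lockRe.FindStringSubmatch(sc.Text())
    		if m == nil {
    			continue
    		}
    		ts, err := time.Parse("2006-01-02 15:04:05.000", m[1])
    		if err != nil {
    			continue
    		}
    		if m[3] == "Acquired" {
    			acquired[m[2]] = ts
    		} else if t0, ok := acquired[m[2]]; ok {
    			fmt.Printf("[%s] held the IPAM lock for %v\n", m[2], ts.Sub(t0))
    			delete(acquired, m[2])
    		}
    	}
    }

Fed this section (for example, go run lockwatch.go < node.log), it would report [4782] at 44ms, [4798] at 63ms, [4772] at 56ms, and [4804] at 83ms, which is the queueing visible in the timestamps above.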
Apr 30 03:47:47.851500 containerd[1635]: 2025-04-30 03:47:47.819 [INFO][4804] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.70/26] IPv6=[] ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" HandleID="k8s-pod-network.e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.852483 containerd[1635]: 2025-04-30 03:47:47.826 [INFO][4740] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-l6gfz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0", GenerateName:"calico-apiserver-d6dd767f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6dd767f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"", Pod:"calico-apiserver-d6dd767f5-l6gfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394eefb2576", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.852483 containerd[1635]: 2025-04-30 03:47:47.827 [INFO][4740] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.70/32] ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-l6gfz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.852483 containerd[1635]: 2025-04-30 03:47:47.827 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali394eefb2576 ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-l6gfz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.852483 containerd[1635]: 2025-04-30 03:47:47.834 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-l6gfz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.852483 containerd[1635]: 2025-04-30 03:47:47.834 [INFO][4740] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-l6gfz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0", GenerateName:"calico-apiserver-d6dd767f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6dd767f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7", Pod:"calico-apiserver-d6dd767f5-l6gfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394eefb2576", MAC:"06:9f:f8:07:7d:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:47.852483 containerd[1635]: 2025-04-30 03:47:47.846 [INFO][4740] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-l6gfz" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:47:47.873406 containerd[1635]: time="2025-04-30T03:47:47.871573450Z" level=info msg="CreateContainer within sandbox \"2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50c04b04afbfdbaa00d4e7639fabbe82580555888ddc830bbd3184ada6b522fd\"" Apr 30 03:47:47.892642 containerd[1635]: time="2025-04-30T03:47:47.892583336Z" level=info msg="StartContainer for \"50c04b04afbfdbaa00d4e7639fabbe82580555888ddc830bbd3184ada6b522fd\"" Apr 30 03:47:47.922883 containerd[1635]: time="2025-04-30T03:47:47.919711522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:47.922883 containerd[1635]: time="2025-04-30T03:47:47.919959793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:47.922883 containerd[1635]: time="2025-04-30T03:47:47.919974520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.922883 containerd[1635]: time="2025-04-30T03:47:47.922610306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.939800 containerd[1635]: time="2025-04-30T03:47:47.939607402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:47.939800 containerd[1635]: time="2025-04-30T03:47:47.939643460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:47.939800 containerd[1635]: time="2025-04-30T03:47:47.939652598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.939800 containerd[1635]: time="2025-04-30T03:47:47.939728331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:47.964230 containerd[1635]: time="2025-04-30T03:47:47.964139987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548667d5d7-rpfg6,Uid:c2c1ef43-1732-4ee2-984a-7c96357acb4c,Namespace:calico-system,Attempt:1,} returns sandbox id \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\"" Apr 30 03:47:47.969217 containerd[1635]: time="2025-04-30T03:47:47.969185529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsrkz,Uid:8ac99651-c6e8-4435-af79-6d59e81f514c,Namespace:kube-system,Attempt:1,} returns sandbox id \"e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b\"" Apr 30 03:47:47.976279 containerd[1635]: time="2025-04-30T03:47:47.976213194Z" level=info msg="CreateContainer within sandbox \"e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:47:47.998597 containerd[1635]: time="2025-04-30T03:47:47.998571711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q77vc,Uid:77fda787-d7b6-4fd3-822b-fc38fd6f240c,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3\"" Apr 30 03:47:47.998857 containerd[1635]: time="2025-04-30T03:47:47.998841463Z" level=info msg="CreateContainer within sandbox \"e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57529a19568998843974b21b4086a4b561f7e7cac531c81b6c15b86d3e9c5d6c\"" Apr 30 03:47:47.999561 containerd[1635]: time="2025-04-30T03:47:47.999537325Z" level=info msg="StartContainer for \"57529a19568998843974b21b4086a4b561f7e7cac531c81b6c15b86d3e9c5d6c\"" Apr 30 03:47:48.018796 containerd[1635]: time="2025-04-30T03:47:48.018763775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6dd767f5-l6gfz,Uid:517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7\"" Apr 30 03:47:48.033458 containerd[1635]: time="2025-04-30T03:47:48.033318363Z" level=info msg="StartContainer for \"50c04b04afbfdbaa00d4e7639fabbe82580555888ddc830bbd3184ada6b522fd\" returns successfully" Apr 30 03:47:48.057621 containerd[1635]: time="2025-04-30T03:47:48.056700793Z" level=info msg="StartContainer for \"57529a19568998843974b21b4086a4b561f7e7cac531c81b6c15b86d3e9c5d6c\" returns successfully" Apr 30 03:47:48.264980 kubelet[3046]: I0430 03:47:48.264934 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7db6d8ff4d-fn9tg" podStartSLOduration=32.264920784 podStartE2EDuration="32.264920784s" podCreationTimestamp="2025-04-30 03:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:48.264620494 +0000 UTC m=+46.436152385" watchObservedRunningTime="2025-04-30 03:47:48.264920784 +0000 UTC m=+46.436452666" Apr 30 03:47:48.271176 systemd[1]: run-netns-cni\x2d7816e37d\x2d2be9\x2de9fd\x2d0473\x2d30d73d9183f5.mount: Deactivated successfully. Apr 30 03:47:48.271283 systemd[1]: run-netns-cni\x2d751fcc78\x2dee1c\x2d045c\x2d3b2a\x2d4d0bdc6a6865.mount: Deactivated successfully. Apr 30 03:47:48.271359 systemd[1]: run-netns-cni\x2d3077b758\x2d2679\x2ddeb0\x2d6965\x2d66cec35917e3.mount: Deactivated successfully. Apr 30 03:47:48.293630 kubelet[3046]: I0430 03:47:48.293556 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zsrkz" podStartSLOduration=32.293538898 podStartE2EDuration="32.293538898s" podCreationTimestamp="2025-04-30 03:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:47:48.293242796 +0000 UTC m=+46.464774677" watchObservedRunningTime="2025-04-30 03:47:48.293538898 +0000 UTC m=+46.465070779" Apr 30 03:47:48.546142 systemd-networkd[1258]: cali59a626749dc: Gained IPv6LL Apr 30 03:47:48.802241 systemd-networkd[1258]: cali77cc5461302: Gained IPv6LL Apr 30 03:47:48.804840 systemd-networkd[1258]: calia8c29090d42: Gained IPv6LL Apr 30 03:47:48.907586 containerd[1635]: time="2025-04-30T03:47:48.905778667Z" level=info msg="StopPodSandbox for \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\"" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:48.987 [INFO][5233] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:48.988 [INFO][5233] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" iface="eth0" netns="/var/run/netns/cni-3b47a9d2-4138-3be3-2f37-44918228a783" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:48.988 [INFO][5233] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" iface="eth0" netns="/var/run/netns/cni-3b47a9d2-4138-3be3-2f37-44918228a783" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:48.989 [INFO][5233] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" iface="eth0" netns="/var/run/netns/cni-3b47a9d2-4138-3be3-2f37-44918228a783" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:48.989 [INFO][5233] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:48.989 [INFO][5233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:49.014 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" HandleID="k8s-pod-network.5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:49.014 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:49.014 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:49.019 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" HandleID="k8s-pod-network.5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:49.019 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" HandleID="k8s-pod-network.5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:49.020 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:47:49.023350 containerd[1635]: 2025-04-30 03:47:49.022 [INFO][5233] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:47:49.026375 containerd[1635]: time="2025-04-30T03:47:49.024048581Z" level=info msg="TearDown network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\" successfully" Apr 30 03:47:49.026375 containerd[1635]: time="2025-04-30T03:47:49.024079260Z" level=info msg="StopPodSandbox for \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\" returns successfully" Apr 30 03:47:49.027958 containerd[1635]: time="2025-04-30T03:47:49.027088444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855c9c9d54-2tcgf,Uid:e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:47:49.028883 systemd[1]: run-netns-cni\x2d3b47a9d2\x2d4138\x2d3be3\x2d2f37\x2d44918228a783.mount: Deactivated successfully. 
Apr 30 03:47:49.126962 systemd-networkd[1258]: cali6cb2c30de00: Link UP Apr 30 03:47:49.127710 systemd-networkd[1258]: cali6cb2c30de00: Gained carrier Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.066 [INFO][5246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0 calico-apiserver-855c9c9d54- calico-apiserver e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86 821 0 2025-04-30 03:47:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855c9c9d54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-c-b54c1f5c93 calico-apiserver-855c9c9d54-2tcgf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cb2c30de00 [] []}} ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-2tcgf" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.066 [INFO][5246] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-2tcgf" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.088 [INFO][5259] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.097 [INFO][5259] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-c-b54c1f5c93", "pod":"calico-apiserver-855c9c9d54-2tcgf", "timestamp":"2025-04-30 03:47:49.088266794 +0000 UTC"}, Hostname:"ci-4081-3-3-c-b54c1f5c93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.097 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.097 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
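A side note on the escaped names in these records: the WorkloadEndpoint name ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0 is derived from the Node, Orchestrator, Pod, and Endpoint fields of the same struct dump, with each component's dashes doubled so the single-dash separators stay unambiguous. A small sketch of that convention, inferred from the log output alone rather than lifted from Calico's source:

package main

import (
	"fmt"
	"strings"
)

// wepName reproduces the naming convention visible in the records above:
// every dash inside a component is doubled, then the components are joined
// with single dashes, so node and pod names that themselves contain dashes
// remain recoverable.
func wepName(node, orchestrator, pod, endpoint string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return strings.Join([]string{esc(node), esc(orchestrator), esc(pod), esc(endpoint)}, "-")
}

func main() {
	fmt.Println(wepName("ci-4081-3-3-c-b54c1f5c93", "k8s",
		"calico-apiserver-855c9c9d54-2tcgf", "eth0"))
	// Output: ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0
}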
Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.097 [INFO][5259] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-b54c1f5c93' Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.098 [INFO][5259] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.102 [INFO][5259] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.105 [INFO][5259] ipam/ipam.go 489: Trying affinity for 192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.106 [INFO][5259] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.108 [INFO][5259] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.108 [INFO][5259] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.110 [INFO][5259] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.114 [INFO][5259] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.122 [INFO][5259] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.71/26] block=192.168.106.64/26 handle="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.122 [INFO][5259] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.71/26] handle="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.122 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
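The IPAM walk just above follows a fixed shape: confirm the host's affinity for block 192.168.106.64/26, load the block, and hand out the next free address, here 192.168.106.71 after .64 through .70 were claimed earlier in the log. A toy version of the block scan, assuming a plain in-memory used-set (real Calico additionally handles affinities, reservations, and compare-and-swap retries against the datastore, per the "Writing block in order to claim IPs" step):

package main

import (
	"fmt"
	"net"
)

// nextFree scans a block for the first unclaimed address: a toy version of
// the "Attempting to assign 1 addresses from block" step in the log.
func nextFree(block *net.IPNet, used map[string]bool) net.IP {
	for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = inc(ip) {
		if !used[ip.String()] {
			return ip
		}
	}
	return nil // block exhausted; Calico would move on to another block
}

// inc returns ip+1 without mutating its argument.
func inc(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.106.64/26")
	used := make(map[string]bool)
	for i := 64; i <= 70; i++ { // .64-.70 already claimed earlier in the log
		used[fmt.Sprintf("192.168.106.%d", i)] = true
	}
	fmt.Println(nextFree(block, used)) // 192.168.106.71, matching the claim above
}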
Apr 30 03:47:49.145004 containerd[1635]: 2025-04-30 03:47:49.122 [INFO][5259] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.71/26] IPv6=[] ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.145806 containerd[1635]: 2025-04-30 03:47:49.124 [INFO][5246] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-2tcgf" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0", GenerateName:"calico-apiserver-855c9c9d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855c9c9d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"", Pod:"calico-apiserver-855c9c9d54-2tcgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb2c30de00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:49.145806 containerd[1635]: 2025-04-30 03:47:49.124 [INFO][5246] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.71/32] ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-2tcgf" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.145806 containerd[1635]: 2025-04-30 03:47:49.124 [INFO][5246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cb2c30de00 ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-2tcgf" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.145806 containerd[1635]: 2025-04-30 03:47:49.127 [INFO][5246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-2tcgf" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.145806 containerd[1635]: 2025-04-30 03:47:49.128 [INFO][5246] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-2tcgf" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0", GenerateName:"calico-apiserver-855c9c9d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855c9c9d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc", Pod:"calico-apiserver-855c9c9d54-2tcgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb2c30de00", MAC:"96:4c:3e:97:2b:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:47:49.145806 containerd[1635]: 2025-04-30 03:47:49.140 [INFO][5246] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Namespace="calico-apiserver" Pod="calico-apiserver-855c9c9d54-2tcgf" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:47:49.168208 containerd[1635]: time="2025-04-30T03:47:49.167243863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:47:49.168208 containerd[1635]: time="2025-04-30T03:47:49.167403907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:47:49.168208 containerd[1635]: time="2025-04-30T03:47:49.167416922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:47:49.168208 containerd[1635]: time="2025-04-30T03:47:49.167664422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:47:49.214426 containerd[1635]: time="2025-04-30T03:47:49.214323482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855c9c9d54-2tcgf,Uid:e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\""
Apr 30 03:47:49.317842 systemd-networkd[1258]: calib9c823a46ef: Gained IPv6LL
Apr 30 03:47:49.570021 systemd-networkd[1258]: cali394eefb2576: Gained IPv6LL
Apr 30 03:47:49.762029 systemd-networkd[1258]: cali4cf2372eaa5: Gained IPv6LL
Apr 30 03:47:50.166651 containerd[1635]: time="2025-04-30T03:47:50.166617814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:50.167623 containerd[1635]: time="2025-04-30T03:47:50.167573198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437"
Apr 30 03:47:50.167977 containerd[1635]: time="2025-04-30T03:47:50.167941919Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:50.171479 containerd[1635]: time="2025-04-30T03:47:50.171437206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:47:50.172486 containerd[1635]: time="2025-04-30T03:47:50.172007378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.353751565s"
Apr 30 03:47:50.172486 containerd[1635]: time="2025-04-30T03:47:50.172031364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
Apr 30 03:47:50.175381 containerd[1635]: time="2025-04-30T03:47:50.175245828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
Apr 30 03:47:50.176615 containerd[1635]: time="2025-04-30T03:47:50.176287566Z" level=info msg="CreateContainer within sandbox \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 30 03:47:50.190275 containerd[1635]: time="2025-04-30T03:47:50.190249047Z" level=info msg="CreateContainer within sandbox \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6\""
Apr 30 03:47:50.190866 containerd[1635]: time="2025-04-30T03:47:50.190844738Z" level=info msg="StartContainer for \"42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6\""
Apr 30 03:47:50.245056 containerd[1635]: time="2025-04-30T03:47:50.245019529Z" level=info msg="StartContainer for \"42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6\" returns successfully"
Apr 30 03:47:50.338259 systemd-networkd[1258]: cali6cb2c30de00: Gained IPv6LL
Apr 30 03:47:51.260618
kubelet[3046]: I0430 03:47:51.260536 3046 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:53.071834 containerd[1635]: time="2025-04-30T03:47:53.071733528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:53.072705 containerd[1635]: time="2025-04-30T03:47:53.072667992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:47:53.073520 containerd[1635]: time="2025-04-30T03:47:53.073496906Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:53.076442 containerd[1635]: time="2025-04-30T03:47:53.075860444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:53.076442 containerd[1635]: time="2025-04-30T03:47:53.076331037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.901061325s" Apr 30 03:47:53.076442 containerd[1635]: time="2025-04-30T03:47:53.076370042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:47:53.077471 containerd[1635]: time="2025-04-30T03:47:53.077455432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:47:53.097171 containerd[1635]: time="2025-04-30T03:47:53.097079876Z" level=info msg="CreateContainer within sandbox \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:47:53.106978 containerd[1635]: time="2025-04-30T03:47:53.106955407Z" level=info msg="CreateContainer within sandbox \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\"" Apr 30 03:47:53.107841 containerd[1635]: time="2025-04-30T03:47:53.107394852Z" level=info msg="StartContainer for \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\"" Apr 30 03:47:53.183689 containerd[1635]: time="2025-04-30T03:47:53.183658614Z" level=info msg="StartContainer for \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\" returns successfully" Apr 30 03:47:53.292925 kubelet[3046]: I0430 03:47:53.291795 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-855c9c9d54-6r7fb" podStartSLOduration=28.932482451 podStartE2EDuration="31.291782531s" podCreationTimestamp="2025-04-30 03:47:22 +0000 UTC" firstStartedPulling="2025-04-30 03:47:47.81561022 +0000 UTC m=+45.987142101" lastFinishedPulling="2025-04-30 03:47:50.1749103 +0000 UTC m=+48.346442181" observedRunningTime="2025-04-30 03:47:50.270419811 +0000 UTC m=+48.441951723" 
watchObservedRunningTime="2025-04-30 03:47:53.291782531 +0000 UTC m=+51.463314412" Apr 30 03:47:54.325775 kubelet[3046]: I0430 03:47:54.325596 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-548667d5d7-rpfg6" podStartSLOduration=27.21420786 podStartE2EDuration="32.325579854s" podCreationTimestamp="2025-04-30 03:47:22 +0000 UTC" firstStartedPulling="2025-04-30 03:47:47.965914728 +0000 UTC m=+46.137446609" lastFinishedPulling="2025-04-30 03:47:53.077286722 +0000 UTC m=+51.248818603" observedRunningTime="2025-04-30 03:47:53.31301096 +0000 UTC m=+51.484542842" watchObservedRunningTime="2025-04-30 03:47:54.325579854 +0000 UTC m=+52.497111745" Apr 30 03:47:54.612945 containerd[1635]: time="2025-04-30T03:47:54.612656047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:54.613573 containerd[1635]: time="2025-04-30T03:47:54.613538632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:47:54.614916 containerd[1635]: time="2025-04-30T03:47:54.614473277Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:54.616143 containerd[1635]: time="2025-04-30T03:47:54.616124153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:54.616582 containerd[1635]: time="2025-04-30T03:47:54.616556333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.538862456s" Apr 30 03:47:54.616620 containerd[1635]: time="2025-04-30T03:47:54.616583774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:47:54.617655 containerd[1635]: time="2025-04-30T03:47:54.617641303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:47:54.622273 containerd[1635]: time="2025-04-30T03:47:54.622255054Z" level=info msg="CreateContainer within sandbox \"f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:47:54.643101 containerd[1635]: time="2025-04-30T03:47:54.643073052Z" level=info msg="CreateContainer within sandbox \"f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ff74f7aa1213943a2c3b11d1288e28eb1de0ace4f94970259f413627f26dddfd\"" Apr 30 03:47:54.644341 containerd[1635]: time="2025-04-30T03:47:54.643567582Z" level=info msg="StartContainer for \"ff74f7aa1213943a2c3b11d1288e28eb1de0ace4f94970259f413627f26dddfd\"" Apr 30 03:47:54.706072 containerd[1635]: time="2025-04-30T03:47:54.705971495Z" level=info msg="StartContainer for \"ff74f7aa1213943a2c3b11d1288e28eb1de0ace4f94970259f413627f26dddfd\" returns successfully" Apr 30 03:47:55.068973 containerd[1635]: time="2025-04-30T03:47:55.068310155Z" 
level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:55.069705 containerd[1635]: time="2025-04-30T03:47:55.069363986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:47:55.071366 containerd[1635]: time="2025-04-30T03:47:55.071333515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 453.566313ms" Apr 30 03:47:55.071443 containerd[1635]: time="2025-04-30T03:47:55.071366859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:47:55.072673 containerd[1635]: time="2025-04-30T03:47:55.072260997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:47:55.073922 containerd[1635]: time="2025-04-30T03:47:55.073857779Z" level=info msg="CreateContainer within sandbox \"e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:47:55.090102 containerd[1635]: time="2025-04-30T03:47:55.090078950Z" level=info msg="CreateContainer within sandbox \"e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3d8a9d0f387c512496f88cc3aca86cc55706adf7e40d8aa1132f2b87e55c2679\"" Apr 30 03:47:55.090464 containerd[1635]: time="2025-04-30T03:47:55.090443832Z" level=info msg="StartContainer for \"3d8a9d0f387c512496f88cc3aca86cc55706adf7e40d8aa1132f2b87e55c2679\"" Apr 30 03:47:55.154259 containerd[1635]: time="2025-04-30T03:47:55.153943073Z" level=info msg="StartContainer for \"3d8a9d0f387c512496f88cc3aca86cc55706adf7e40d8aa1132f2b87e55c2679\" returns successfully" Apr 30 03:47:55.288506 kubelet[3046]: I0430 03:47:55.288454 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d6dd767f5-l6gfz" podStartSLOduration=25.237642529 podStartE2EDuration="32.288441027s" podCreationTimestamp="2025-04-30 03:47:23 +0000 UTC" firstStartedPulling="2025-04-30 03:47:48.021251798 +0000 UTC m=+46.192783680" lastFinishedPulling="2025-04-30 03:47:55.072050286 +0000 UTC m=+53.243582178" observedRunningTime="2025-04-30 03:47:55.288030749 +0000 UTC m=+53.459562630" watchObservedRunningTime="2025-04-30 03:47:55.288441027 +0000 UTC m=+53.459972908" Apr 30 03:47:55.556520 containerd[1635]: time="2025-04-30T03:47:55.556398263Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:55.557913 containerd[1635]: time="2025-04-30T03:47:55.557658084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:47:55.559961 containerd[1635]: time="2025-04-30T03:47:55.559920090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 487.632402ms" Apr 30 03:47:55.559961 containerd[1635]: time="2025-04-30T03:47:55.559951879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:47:55.564764 containerd[1635]: time="2025-04-30T03:47:55.564643248Z" level=info msg="CreateContainer within sandbox \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:47:55.566048 containerd[1635]: time="2025-04-30T03:47:55.566032305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:47:55.586584 containerd[1635]: time="2025-04-30T03:47:55.586539736Z" level=info msg="CreateContainer within sandbox \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\"" Apr 30 03:47:55.590160 containerd[1635]: time="2025-04-30T03:47:55.589192231Z" level=info msg="StartContainer for \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\"" Apr 30 03:47:55.592308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966885333.mount: Deactivated successfully. Apr 30 03:47:55.678233 containerd[1635]: time="2025-04-30T03:47:55.678193546Z" level=info msg="StartContainer for \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\" returns successfully" Apr 30 03:47:56.284555 kubelet[3046]: I0430 03:47:56.284479 3046 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:47:56.298378 kubelet[3046]: I0430 03:47:56.298332 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-855c9c9d54-2tcgf" podStartSLOduration=27.952880073 podStartE2EDuration="34.298314512s" podCreationTimestamp="2025-04-30 03:47:22 +0000 UTC" firstStartedPulling="2025-04-30 03:47:49.215492192 +0000 UTC m=+47.387024073" lastFinishedPulling="2025-04-30 03:47:55.560926631 +0000 UTC m=+53.732458512" observedRunningTime="2025-04-30 03:47:56.2976867 +0000 UTC m=+54.469218581" watchObservedRunningTime="2025-04-30 03:47:56.298314512 +0000 UTC m=+54.469846393" Apr 30 03:47:57.441584 containerd[1635]: time="2025-04-30T03:47:57.441537499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:57.442706 containerd[1635]: time="2025-04-30T03:47:57.442666252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:47:57.444220 containerd[1635]: time="2025-04-30T03:47:57.443685247Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:57.446696 containerd[1635]: time="2025-04-30T03:47:57.446146580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:47:57.446696 containerd[1635]: time="2025-04-30T03:47:57.446589592Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.880462717s" Apr 30 03:47:57.446696 containerd[1635]: time="2025-04-30T03:47:57.446616293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:47:57.448592 containerd[1635]: time="2025-04-30T03:47:57.448536729Z" level=info msg="CreateContainer within sandbox \"f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:47:57.472298 containerd[1635]: time="2025-04-30T03:47:57.472261837Z" level=info msg="CreateContainer within sandbox \"f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8a99e4f54211637f1df7fc5ea5f60ae7aa27d5e02dcadbd99363de6475e1f093\"" Apr 30 03:47:57.472663 containerd[1635]: time="2025-04-30T03:47:57.472635997Z" level=info msg="StartContainer for \"8a99e4f54211637f1df7fc5ea5f60ae7aa27d5e02dcadbd99363de6475e1f093\"" Apr 30 03:47:57.525787 containerd[1635]: time="2025-04-30T03:47:57.525706946Z" level=info msg="StartContainer for \"8a99e4f54211637f1df7fc5ea5f60ae7aa27d5e02dcadbd99363de6475e1f093\" returns successfully" Apr 30 03:47:58.185714 kubelet[3046]: I0430 03:47:58.185671 3046 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:47:58.189362 kubelet[3046]: I0430 03:47:58.189341 3046 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:47:58.317974 kubelet[3046]: I0430 03:47:58.317913 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q77vc" podStartSLOduration=26.871129236 podStartE2EDuration="36.317880341s" podCreationTimestamp="2025-04-30 03:47:22 +0000 UTC" firstStartedPulling="2025-04-30 03:47:48.000663594 +0000 UTC m=+46.172195485" lastFinishedPulling="2025-04-30 03:47:57.447414708 +0000 UTC m=+55.618946590" observedRunningTime="2025-04-30 03:47:58.316334976 +0000 UTC m=+56.487866867" watchObservedRunningTime="2025-04-30 03:47:58.317880341 +0000 UTC m=+56.489412232" Apr 30 03:47:59.794823 kubelet[3046]: I0430 03:47:59.794619 3046 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:48:01.444078 kubelet[3046]: I0430 03:48:01.443924 3046 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:48:01.529059 containerd[1635]: time="2025-04-30T03:48:01.529014843Z" level=info msg="StopContainer for \"42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6\" with timeout 30 (s)" Apr 30 03:48:01.548998 containerd[1635]: time="2025-04-30T03:48:01.548903847Z" level=info msg="Stop container \"42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6\" with signal terminated" Apr 30 03:48:01.610240 kubelet[3046]: I0430 03:48:01.596130 3046 topology_manager.go:215] "Topology Admit Handler" 
podUID="1e509a69-9ea1-4717-9361-97a47dcd4f18" podNamespace="calico-apiserver" podName="calico-apiserver-d6dd767f5-srwnv" Apr 30 03:48:01.670140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6-rootfs.mount: Deactivated successfully. Apr 30 03:48:01.678461 containerd[1635]: time="2025-04-30T03:48:01.672987906Z" level=info msg="shim disconnected" id=42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6 namespace=k8s.io Apr 30 03:48:01.678461 containerd[1635]: time="2025-04-30T03:48:01.678422224Z" level=warning msg="cleaning up after shim disconnected" id=42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6 namespace=k8s.io Apr 30 03:48:01.678461 containerd[1635]: time="2025-04-30T03:48:01.678433296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:01.722109 containerd[1635]: time="2025-04-30T03:48:01.721922859Z" level=info msg="StopContainer for \"42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6\" returns successfully" Apr 30 03:48:01.722753 containerd[1635]: time="2025-04-30T03:48:01.722620664Z" level=info msg="StopPodSandbox for \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\"" Apr 30 03:48:01.722753 containerd[1635]: time="2025-04-30T03:48:01.722652495Z" level=info msg="Container to stop \"42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:48:01.728564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070-shm.mount: Deactivated successfully. Apr 30 03:48:01.737560 kubelet[3046]: I0430 03:48:01.736883 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjlmc\" (UniqueName: \"kubernetes.io/projected/1e509a69-9ea1-4717-9361-97a47dcd4f18-kube-api-access-bjlmc\") pod \"calico-apiserver-d6dd767f5-srwnv\" (UID: \"1e509a69-9ea1-4717-9361-97a47dcd4f18\") " pod="calico-apiserver/calico-apiserver-d6dd767f5-srwnv" Apr 30 03:48:01.738556 kubelet[3046]: I0430 03:48:01.737646 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e509a69-9ea1-4717-9361-97a47dcd4f18-calico-apiserver-certs\") pod \"calico-apiserver-d6dd767f5-srwnv\" (UID: \"1e509a69-9ea1-4717-9361-97a47dcd4f18\") " pod="calico-apiserver/calico-apiserver-d6dd767f5-srwnv" Apr 30 03:48:01.766508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070-rootfs.mount: Deactivated successfully. 
Apr 30 03:48:01.768038 containerd[1635]: time="2025-04-30T03:48:01.764981705Z" level=info msg="shim disconnected" id=d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070 namespace=k8s.io Apr 30 03:48:01.768038 containerd[1635]: time="2025-04-30T03:48:01.766844432Z" level=warning msg="cleaning up after shim disconnected" id=d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070 namespace=k8s.io Apr 30 03:48:01.768038 containerd[1635]: time="2025-04-30T03:48:01.766857247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:01.928292 containerd[1635]: time="2025-04-30T03:48:01.928031599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6dd767f5-srwnv,Uid:1e509a69-9ea1-4717-9361-97a47dcd4f18,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:48:01.929331 containerd[1635]: time="2025-04-30T03:48:01.929288757Z" level=info msg="StopPodSandbox for \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\"" Apr 30 03:48:02.013705 systemd-networkd[1258]: cali59a626749dc: Link DOWN Apr 30 03:48:02.013711 systemd-networkd[1258]: cali59a626749dc: Lost carrier Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:01.996 [WARNING][5714] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77fda787-d7b6-4fd3-822b-fc38fd6f240c", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3", Pod:"csi-node-driver-q77vc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4cf2372eaa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:01.997 [INFO][5714] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:01.997 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" iface="eth0" netns="" Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:01.997 [INFO][5714] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:01.997 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:02.059 [INFO][5736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" HandleID="k8s-pod-network.c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:02.059 [INFO][5736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:02.059 [INFO][5736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:02.066 [WARNING][5736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" HandleID="k8s-pod-network.c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:02.066 [INFO][5736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" HandleID="k8s-pod-network.c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:02.069 [INFO][5736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.081154 containerd[1635]: 2025-04-30 03:48:02.079 [INFO][5714] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:48:02.082396 containerd[1635]: time="2025-04-30T03:48:02.081629627Z" level=info msg="TearDown network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\" successfully" Apr 30 03:48:02.082396 containerd[1635]: time="2025-04-30T03:48:02.081662108Z" level=info msg="StopPodSandbox for \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\" returns successfully" Apr 30 03:48:02.083029 containerd[1635]: time="2025-04-30T03:48:02.082972135Z" level=info msg="RemovePodSandbox for \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\"" Apr 30 03:48:02.084640 containerd[1635]: time="2025-04-30T03:48:02.084624943Z" level=info msg="Forcibly stopping sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\"" Apr 30 03:48:02.115727 systemd-networkd[1258]: califabb3f401cb: Link UP Apr 30 03:48:02.117213 systemd-networkd[1258]: califabb3f401cb: Gained carrier Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.002 [INFO][5719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0 calico-apiserver-d6dd767f5- calico-apiserver 1e509a69-9ea1-4717-9361-97a47dcd4f18 932 0 2025-04-30 03:48:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d6dd767f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-c-b54c1f5c93 calico-apiserver-d6dd767f5-srwnv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califabb3f401cb [] []}} ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-srwnv" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.002 [INFO][5719] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-srwnv" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.066 [INFO][5753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" HandleID="k8s-pod-network.d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.074 [INFO][5753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" HandleID="k8s-pod-network.d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a5550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-c-b54c1f5c93", "pod":"calico-apiserver-d6dd767f5-srwnv", "timestamp":"2025-04-30 03:48:02.066130719 +0000 UTC"}, Hostname:"ci-4081-3-3-c-b54c1f5c93", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.074 [INFO][5753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.074 [INFO][5753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.074 [INFO][5753] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-b54c1f5c93' Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.076 [INFO][5753] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.081 [INFO][5753] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.086 [INFO][5753] ipam/ipam.go 489: Trying affinity for 192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.088 [INFO][5753] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.090 [INFO][5753] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.090 [INFO][5753] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.092 [INFO][5753] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53 Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.097 [INFO][5753] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.106 [INFO][5753] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.72/26] block=192.168.106.64/26 handle="k8s-pod-network.d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.106 [INFO][5753] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.72/26] handle="k8s-pod-network.d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" host="ci-4081-3-3-c-b54c1f5c93" Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.106 [INFO][5753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
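The pod startup figures reported by pod_startup_latency_tracker throughout this log are internally consistent: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For calico-apiserver-855c9c9d54-2tcgf earlier, 34.298314512s - 6.345434439s = 27.952880073s, exactly the logged SLO value. A short Go check of that arithmetic (the logged podCreationTimestamp is truncated to whole seconds, which explains the sub-millisecond drift in the recomputed E2E):

package main

import (
	"fmt"
	"time"
)

// Recompute kubelet's pod_startup_latency_tracker figures for
// calico-apiserver-855c9c9d54-2tcgf from the timestamps in the log:
// SLO duration = end-to-end startup minus the image-pull window.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-04-30 03:47:22 +0000 UTC") // truncated to seconds in the log
	firstPull := parse("2025-04-30 03:47:49.215492192 +0000 UTC")
	lastPull := parse("2025-04-30 03:47:55.560926631 +0000 UTC")
	observed := parse("2025-04-30 03:47:56.2976867 +0000 UTC")

	e2e := observed.Sub(created)    // ~34.3 s; kubelet logged 34.298314512 s
	pull := lastPull.Sub(firstPull) // 6.345434439 s spent pulling the image
	fmt.Println("pull window:", pull)
	fmt.Println("slo ~=", e2e-pull) // kubelet logged 27.952880073 s
}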
Apr 30 03:48:02.141098 containerd[1635]: 2025-04-30 03:48:02.106 [INFO][5753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.72/26] IPv6=[] ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" HandleID="k8s-pod-network.d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" Apr 30 03:48:02.144436 containerd[1635]: 2025-04-30 03:48:02.111 [INFO][5719] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-srwnv" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0", GenerateName:"calico-apiserver-d6dd767f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e509a69-9ea1-4717-9361-97a47dcd4f18", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6dd767f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"", Pod:"calico-apiserver-d6dd767f5-srwnv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califabb3f401cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.144436 containerd[1635]: 2025-04-30 03:48:02.112 [INFO][5719] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.72/32] ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-srwnv" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" Apr 30 03:48:02.144436 containerd[1635]: 2025-04-30 03:48:02.112 [INFO][5719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califabb3f401cb ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-srwnv" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" Apr 30 03:48:02.144436 containerd[1635]: 2025-04-30 03:48:02.116 [INFO][5719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-srwnv" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" Apr 30 03:48:02.144436 containerd[1635]: 2025-04-30 03:48:02.119 [INFO][5719] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-srwnv" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0", GenerateName:"calico-apiserver-d6dd767f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e509a69-9ea1-4717-9361-97a47dcd4f18", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6dd767f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53", Pod:"calico-apiserver-d6dd767f5-srwnv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califabb3f401cb", MAC:"d2:73:14:7d:b8:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.144436 containerd[1635]: 2025-04-30 03:48:02.138 [INFO][5719] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53" Namespace="calico-apiserver" Pod="calico-apiserver-d6dd767f5-srwnv" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--srwnv-eth0" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.009 [INFO][5694] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.010 [INFO][5694] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" iface="eth0" netns="/var/run/netns/cni-643d698b-9456-4ebe-61df-31e2b5353a9e" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.011 [INFO][5694] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" iface="eth0" netns="/var/run/netns/cni-643d698b-9456-4ebe-61df-31e2b5353a9e" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.025 [INFO][5694] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" after=15.055765ms iface="eth0" netns="/var/run/netns/cni-643d698b-9456-4ebe-61df-31e2b5353a9e" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.027 [INFO][5694] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.027 [INFO][5694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.074 [INFO][5746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.076 [INFO][5746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.106 [INFO][5746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.159 [INFO][5746] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.159 [INFO][5746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.161 [INFO][5746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.168139 containerd[1635]: 2025-04-30 03:48:02.165 [INFO][5694] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:48:02.169363 containerd[1635]: time="2025-04-30T03:48:02.169022118Z" level=info msg="TearDown network for sandbox \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\" successfully" Apr 30 03:48:02.169363 containerd[1635]: time="2025-04-30T03:48:02.169055251Z" level=info msg="StopPodSandbox for \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\" returns successfully" Apr 30 03:48:02.170098 containerd[1635]: time="2025-04-30T03:48:02.169447586Z" level=info msg="StopPodSandbox for \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\"" Apr 30 03:48:02.187609 containerd[1635]: time="2025-04-30T03:48:02.187178051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:48:02.187609 containerd[1635]: time="2025-04-30T03:48:02.187221713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:48:02.187609 containerd[1635]: time="2025-04-30T03:48:02.187231713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:48:02.187609 containerd[1635]: time="2025-04-30T03:48:02.187291446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.146 [WARNING][5775] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77fda787-d7b6-4fd3-822b-fc38fd6f240c", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"f7e2b00c491f018de364ea7e7465b97b912f051c497f1116dd86553f4fe5d7d3", Pod:"csi-node-driver-q77vc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4cf2372eaa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.147 [INFO][5775] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.147 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" iface="eth0" netns="" Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.147 [INFO][5775] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.147 [INFO][5775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.190 [INFO][5791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" HandleID="k8s-pod-network.c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.191 [INFO][5791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.191 [INFO][5791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.200 [WARNING][5791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" HandleID="k8s-pod-network.c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.200 [INFO][5791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" HandleID="k8s-pod-network.c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-csi--node--driver--q77vc-eth0" Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.202 [INFO][5791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.212025 containerd[1635]: 2025-04-30 03:48:02.209 [INFO][5775] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63" Apr 30 03:48:02.213881 containerd[1635]: time="2025-04-30T03:48:02.211994429Z" level=info msg="TearDown network for sandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\" successfully" Apr 30 03:48:02.230346 containerd[1635]: time="2025-04-30T03:48:02.230303602Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:48:02.230531 containerd[1635]: time="2025-04-30T03:48:02.230515936Z" level=info msg="RemovePodSandbox \"c453e79a4f4624e02fb6bcbd9326cf6b1f3e9173bac7678e20639f75ac6dfa63\" returns successfully" Apr 30 03:48:02.235216 containerd[1635]: time="2025-04-30T03:48:02.235191825Z" level=info msg="StopPodSandbox for \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\"" Apr 30 03:48:02.274995 containerd[1635]: time="2025-04-30T03:48:02.274908875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6dd767f5-srwnv,Uid:1e509a69-9ea1-4717-9361-97a47dcd4f18,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53\"" Apr 30 03:48:02.282091 containerd[1635]: time="2025-04-30T03:48:02.282071324Z" level=info msg="CreateContainer within sandbox \"d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:48:02.300709 containerd[1635]: time="2025-04-30T03:48:02.300681960Z" level=info msg="CreateContainer within sandbox \"d8f52b84441736e7164d25457c232b864d543145a639100a09a78906aa3fdf53\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"934c1bd84a64d250b8ebe136dc281b1156807f76930ac34606f88e15a3802346\"" Apr 30 03:48:02.303989 containerd[1635]: time="2025-04-30T03:48:02.302438666Z" level=info msg="StartContainer for \"934c1bd84a64d250b8ebe136dc281b1156807f76930ac34606f88e15a3802346\"" Apr 30 03:48:02.314604 kubelet[3046]: I0430 03:48:02.314586 3046 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.257 [WARNING][5830] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0", GenerateName:"calico-apiserver-855c9c9d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"94898af1-d8d4-4a31-ac96-01740beca0cc", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855c9c9d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070", Pod:"calico-apiserver-855c9c9d54-6r7fb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59a626749dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.261 [INFO][5830] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.261 [INFO][5830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" iface="eth0" netns="" Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.261 [INFO][5830] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.261 [INFO][5830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.322 [INFO][5869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.322 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.322 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.332 [WARNING][5869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.332 [INFO][5869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.334 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.338254 containerd[1635]: 2025-04-30 03:48:02.336 [INFO][5830] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.340520 containerd[1635]: time="2025-04-30T03:48:02.338359699Z" level=info msg="TearDown network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\" successfully" Apr 30 03:48:02.340520 containerd[1635]: time="2025-04-30T03:48:02.338379336Z" level=info msg="StopPodSandbox for \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\" returns successfully" Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.310 [WARNING][5864] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0", GenerateName:"calico-apiserver-d6dd767f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6dd767f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7", Pod:"calico-apiserver-d6dd767f5-l6gfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394eefb2576", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.311 [INFO][5864] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.311 [INFO][5864] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" iface="eth0" netns="" Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.311 [INFO][5864] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.311 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.337 [INFO][5891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" HandleID="k8s-pod-network.94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.337 [INFO][5891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.337 [INFO][5891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.344 [WARNING][5891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" HandleID="k8s-pod-network.94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.344 [INFO][5891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" HandleID="k8s-pod-network.94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.346 [INFO][5891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.355032 containerd[1635]: 2025-04-30 03:48:02.348 [INFO][5864] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:48:02.355032 containerd[1635]: time="2025-04-30T03:48:02.349920158Z" level=info msg="TearDown network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\" successfully" Apr 30 03:48:02.355032 containerd[1635]: time="2025-04-30T03:48:02.349936269Z" level=info msg="StopPodSandbox for \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\" returns successfully" Apr 30 03:48:02.355032 containerd[1635]: time="2025-04-30T03:48:02.350241348Z" level=info msg="RemovePodSandbox for \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\"" Apr 30 03:48:02.355032 containerd[1635]: time="2025-04-30T03:48:02.350294258Z" level=info msg="Forcibly stopping sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\"" Apr 30 03:48:02.403343 containerd[1635]: time="2025-04-30T03:48:02.403313766Z" level=info msg="StartContainer for \"934c1bd84a64d250b8ebe136dc281b1156807f76930ac34606f88e15a3802346\" returns successfully" Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.398 [WARNING][5926] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0", GenerateName:"calico-apiserver-d6dd767f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"517ddd0e-51ba-4be9-9cd1-8ef6b20c0eec", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6dd767f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"e1012af04eaec3a425dce9a75ee7eaf175f801d5dd1ede972062dc47802383d7", Pod:"calico-apiserver-d6dd767f5-l6gfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394eefb2576", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.398 [INFO][5926] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.398 [INFO][5926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" iface="eth0" netns="" Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.398 [INFO][5926] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.398 [INFO][5926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.425 [INFO][5938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" HandleID="k8s-pod-network.94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.425 [INFO][5938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.425 [INFO][5938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.431 [WARNING][5938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" HandleID="k8s-pod-network.94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.431 [INFO][5938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" HandleID="k8s-pod-network.94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--d6dd767f5--l6gfz-eth0" Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.433 [INFO][5938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.435856 containerd[1635]: 2025-04-30 03:48:02.434 [INFO][5926] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f" Apr 30 03:48:02.436362 containerd[1635]: time="2025-04-30T03:48:02.435917862Z" level=info msg="TearDown network for sandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\" successfully" Apr 30 03:48:02.441393 containerd[1635]: time="2025-04-30T03:48:02.441367088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:02.441457 containerd[1635]: time="2025-04-30T03:48:02.441421181Z" level=info msg="RemovePodSandbox \"94b898f26f5b6fd6be0de078c4c3dd9cc9dc2f29d08edaab95e883a0ca6d769f\" returns successfully" Apr 30 03:48:02.442304 containerd[1635]: time="2025-04-30T03:48:02.442276405Z" level=info msg="StopPodSandbox for \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\"" Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.472 [WARNING][5962] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0", GenerateName:"calico-kube-controllers-548667d5d7-", Namespace:"calico-system", SelfLink:"", UID:"c2c1ef43-1732-4ee2-984a-7c96357acb4c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548667d5d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8", Pod:"calico-kube-controllers-548667d5d7-rpfg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9c823a46ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.472 [INFO][5962] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.472 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" iface="eth0" netns="" Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.472 [INFO][5962] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.472 [INFO][5962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.488 [INFO][5970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" HandleID="k8s-pod-network.a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.488 [INFO][5970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.488 [INFO][5970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.493 [WARNING][5970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" HandleID="k8s-pod-network.a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.494 [INFO][5970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" HandleID="k8s-pod-network.a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.496 [INFO][5970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.504480 containerd[1635]: 2025-04-30 03:48:02.500 [INFO][5962] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:48:02.506273 containerd[1635]: time="2025-04-30T03:48:02.505239442Z" level=info msg="TearDown network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\" successfully" Apr 30 03:48:02.506273 containerd[1635]: time="2025-04-30T03:48:02.505264229Z" level=info msg="StopPodSandbox for \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\" returns successfully" Apr 30 03:48:02.508127 containerd[1635]: time="2025-04-30T03:48:02.507161391Z" level=info msg="RemovePodSandbox for \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\"" Apr 30 03:48:02.508127 containerd[1635]: time="2025-04-30T03:48:02.507188653Z" level=info msg="Forcibly stopping sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\"" Apr 30 03:48:02.561249 kubelet[3046]: I0430 03:48:02.561151 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbvb6\" (UniqueName: \"kubernetes.io/projected/94898af1-d8d4-4a31-ac96-01740beca0cc-kube-api-access-nbvb6\") pod \"94898af1-d8d4-4a31-ac96-01740beca0cc\" (UID: \"94898af1-d8d4-4a31-ac96-01740beca0cc\") " Apr 30 03:48:02.561572 kubelet[3046]: I0430 03:48:02.561428 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94898af1-d8d4-4a31-ac96-01740beca0cc-calico-apiserver-certs\") pod \"94898af1-d8d4-4a31-ac96-01740beca0cc\" (UID: \"94898af1-d8d4-4a31-ac96-01740beca0cc\") " Apr 30 03:48:02.579867 kubelet[3046]: I0430 03:48:02.579234 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94898af1-d8d4-4a31-ac96-01740beca0cc-kube-api-access-nbvb6" (OuterVolumeSpecName: "kube-api-access-nbvb6") pod "94898af1-d8d4-4a31-ac96-01740beca0cc" (UID: "94898af1-d8d4-4a31-ac96-01740beca0cc"). InnerVolumeSpecName "kube-api-access-nbvb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:48:02.579867 kubelet[3046]: I0430 03:48:02.579796 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94898af1-d8d4-4a31-ac96-01740beca0cc-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "94898af1-d8d4-4a31-ac96-01740beca0cc" (UID: "94898af1-d8d4-4a31-ac96-01740beca0cc"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.588 [WARNING][5988] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0", GenerateName:"calico-kube-controllers-548667d5d7-", Namespace:"calico-system", SelfLink:"", UID:"c2c1ef43-1732-4ee2-984a-7c96357acb4c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548667d5d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8", Pod:"calico-kube-controllers-548667d5d7-rpfg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9c823a46ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.588 [INFO][5988] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.588 [INFO][5988] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" iface="eth0" netns="" Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.589 [INFO][5988] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.589 [INFO][5988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.608 [INFO][5998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" HandleID="k8s-pod-network.a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.608 [INFO][5998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.608 [INFO][5998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.613 [WARNING][5998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" HandleID="k8s-pod-network.a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.613 [INFO][5998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" HandleID="k8s-pod-network.a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.614 [INFO][5998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.617392 containerd[1635]: 2025-04-30 03:48:02.615 [INFO][5988] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca" Apr 30 03:48:02.619520 containerd[1635]: time="2025-04-30T03:48:02.617434015Z" level=info msg="TearDown network for sandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\" successfully" Apr 30 03:48:02.620372 containerd[1635]: time="2025-04-30T03:48:02.620268697Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:02.620372 containerd[1635]: time="2025-04-30T03:48:02.620329083Z" level=info msg="RemovePodSandbox \"a4c2246a6770f71b95ed1e535c5d4031dc7e4dd743119e5405bbf011706220ca\" returns successfully" Apr 30 03:48:02.621140 containerd[1635]: time="2025-04-30T03:48:02.620982724Z" level=info msg="StopPodSandbox for \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\"" Apr 30 03:48:02.661887 kubelet[3046]: I0430 03:48:02.661851 3046 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nbvb6\" (UniqueName: \"kubernetes.io/projected/94898af1-d8d4-4a31-ac96-01740beca0cc-kube-api-access-nbvb6\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:02.662592 kubelet[3046]: I0430 03:48:02.662077 3046 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94898af1-d8d4-4a31-ac96-01740beca0cc-calico-apiserver-certs\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:02.679976 systemd[1]: run-netns-cni\x2d643d698b\x2d9456\x2d4ebe\x2d61df\x2d31e2b5353a9e.mount: Deactivated successfully. Apr 30 03:48:02.680089 systemd[1]: var-lib-kubelet-pods-94898af1\x2dd8d4\x2d4a31\x2dac96\x2d01740beca0cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnbvb6.mount: Deactivated successfully. Apr 30 03:48:02.680165 systemd[1]: var-lib-kubelet-pods-94898af1\x2dd8d4\x2d4a31\x2dac96\x2d01740beca0cc-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.652 [WARNING][6017] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0", GenerateName:"calico-apiserver-855c9c9d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"94898af1-d8d4-4a31-ac96-01740beca0cc", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855c9c9d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070", Pod:"calico-apiserver-855c9c9d54-6r7fb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59a626749dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.652 [INFO][6017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.652 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" iface="eth0" netns="" Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.652 [INFO][6017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.652 [INFO][6017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.684 [INFO][6024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.685 [INFO][6024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.685 [INFO][6024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.692 [WARNING][6024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.692 [INFO][6024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.694 [INFO][6024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.696655 containerd[1635]: 2025-04-30 03:48:02.695 [INFO][6017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.696655 containerd[1635]: time="2025-04-30T03:48:02.696598284Z" level=info msg="TearDown network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\" successfully" Apr 30 03:48:02.696655 containerd[1635]: time="2025-04-30T03:48:02.696619505Z" level=info msg="StopPodSandbox for \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\" returns successfully" Apr 30 03:48:02.697748 containerd[1635]: time="2025-04-30T03:48:02.697318311Z" level=info msg="RemovePodSandbox for \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\"" Apr 30 03:48:02.697748 containerd[1635]: time="2025-04-30T03:48:02.697340423Z" level=info msg="Forcibly stopping sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\"" Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.730 [WARNING][6045] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0", GenerateName:"calico-apiserver-855c9c9d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"94898af1-d8d4-4a31-ac96-01740beca0cc", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855c9c9d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070", Pod:"calico-apiserver-855c9c9d54-6r7fb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59a626749dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.730 [INFO][6045] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.732 [INFO][6045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" iface="eth0" netns="" Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.732 [INFO][6045] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.732 [INFO][6045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.756 [INFO][6053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.756 [INFO][6053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.756 [INFO][6053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.762 [WARNING][6053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.762 [INFO][6053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" HandleID="k8s-pod-network.0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.763 [INFO][6053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.768912 containerd[1635]: 2025-04-30 03:48:02.765 [INFO][6045] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847" Apr 30 03:48:02.768912 containerd[1635]: time="2025-04-30T03:48:02.768583973Z" level=info msg="TearDown network for sandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\" successfully" Apr 30 03:48:02.776523 containerd[1635]: time="2025-04-30T03:48:02.776486818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:02.776624 containerd[1635]: time="2025-04-30T03:48:02.776603139Z" level=info msg="RemovePodSandbox \"0141d72e74feb9aa28d6ab82a36b47910a465c34ab785ae23a9302c71b9c6847\" returns successfully" Apr 30 03:48:02.777016 containerd[1635]: time="2025-04-30T03:48:02.776996176Z" level=info msg="StopPodSandbox for \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\"" Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.811 [WARNING][6072] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ac99651-c6e8-4435-af79-6d59e81f514c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b", Pod:"coredns-7db6d8ff4d-zsrkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8c29090d42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.812 [INFO][6072] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.812 [INFO][6072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" iface="eth0" netns="" Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.812 [INFO][6072] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.812 [INFO][6072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.833 [INFO][6079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" HandleID="k8s-pod-network.916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.834 [INFO][6079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.834 [INFO][6079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.838 [WARNING][6079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" HandleID="k8s-pod-network.916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.838 [INFO][6079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" HandleID="k8s-pod-network.916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.840 [INFO][6079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.844323 containerd[1635]: 2025-04-30 03:48:02.842 [INFO][6072] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:48:02.845394 containerd[1635]: time="2025-04-30T03:48:02.844626305Z" level=info msg="TearDown network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\" successfully" Apr 30 03:48:02.845394 containerd[1635]: time="2025-04-30T03:48:02.844648667Z" level=info msg="StopPodSandbox for \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\" returns successfully" Apr 30 03:48:02.845394 containerd[1635]: time="2025-04-30T03:48:02.845188041Z" level=info msg="RemovePodSandbox for \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\"" Apr 30 03:48:02.845394 containerd[1635]: time="2025-04-30T03:48:02.845209351Z" level=info msg="Forcibly stopping sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\"" Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.874 [WARNING][6097] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ac99651-c6e8-4435-af79-6d59e81f514c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"e1b450454878acf1cf484aabe9b30c69960ea6f5c1cad6145f86a3959690b88b", Pod:"coredns-7db6d8ff4d-zsrkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia8c29090d42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.875 [INFO][6097] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.875 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" iface="eth0" netns="" Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.875 [INFO][6097] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.875 [INFO][6097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.893 [INFO][6104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" HandleID="k8s-pod-network.916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.893 [INFO][6104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.893 [INFO][6104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.898 [WARNING][6104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" HandleID="k8s-pod-network.916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.898 [INFO][6104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" HandleID="k8s-pod-network.916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--zsrkz-eth0" Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.900 [INFO][6104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.902943 containerd[1635]: 2025-04-30 03:48:02.901 [INFO][6097] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2" Apr 30 03:48:02.903311 containerd[1635]: time="2025-04-30T03:48:02.902960052Z" level=info msg="TearDown network for sandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\" successfully" Apr 30 03:48:02.905790 containerd[1635]: time="2025-04-30T03:48:02.905758766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:02.906280 containerd[1635]: time="2025-04-30T03:48:02.905843115Z" level=info msg="RemovePodSandbox \"916ae5b75e7238443e56dee09b04cba469ccd10072c74e6b373a4d0774a6b1e2\" returns successfully" Apr 30 03:48:02.906606 containerd[1635]: time="2025-04-30T03:48:02.906320112Z" level=info msg="StopPodSandbox for \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\"" Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.939 [WARNING][6123] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7758e0dd-8e52-4b84-ad62-173247f00bf9", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554", Pod:"coredns-7db6d8ff4d-fn9tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77cc5461302", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.939 [INFO][6123] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.939 [INFO][6123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" iface="eth0" netns="" Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.939 [INFO][6123] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.939 [INFO][6123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.964 [INFO][6131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" HandleID="k8s-pod-network.1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.964 [INFO][6131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.964 [INFO][6131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.970 [WARNING][6131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" HandleID="k8s-pod-network.1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.970 [INFO][6131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" HandleID="k8s-pod-network.1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.972 [INFO][6131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:02.976224 containerd[1635]: 2025-04-30 03:48:02.973 [INFO][6123] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:48:02.978040 containerd[1635]: time="2025-04-30T03:48:02.976668512Z" level=info msg="TearDown network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\" successfully" Apr 30 03:48:02.978040 containerd[1635]: time="2025-04-30T03:48:02.976694691Z" level=info msg="StopPodSandbox for \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\" returns successfully" Apr 30 03:48:02.978295 containerd[1635]: time="2025-04-30T03:48:02.978267107Z" level=info msg="RemovePodSandbox for \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\"" Apr 30 03:48:02.978395 containerd[1635]: time="2025-04-30T03:48:02.978376465Z" level=info msg="Forcibly stopping sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\"" Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.011 [WARNING][6149] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7758e0dd-8e52-4b84-ad62-173247f00bf9", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"2e1c9525008c241da82957dde1817d0d5ce1e7f978eed10c6d5c880104eaf554", Pod:"coredns-7db6d8ff4d-fn9tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77cc5461302", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.011 [INFO][6149] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.011 [INFO][6149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" iface="eth0" netns="" Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.012 [INFO][6149] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.012 [INFO][6149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.035 [INFO][6157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" HandleID="k8s-pod-network.1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.035 [INFO][6157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.035 [INFO][6157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.040 [WARNING][6157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" HandleID="k8s-pod-network.1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.040 [INFO][6157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" HandleID="k8s-pod-network.1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-coredns--7db6d8ff4d--fn9tg-eth0" Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.042 [INFO][6157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:03.044998 containerd[1635]: 2025-04-30 03:48:03.043 [INFO][6149] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611" Apr 30 03:48:03.045775 containerd[1635]: time="2025-04-30T03:48:03.045499230Z" level=info msg="TearDown network for sandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\" successfully" Apr 30 03:48:03.049644 containerd[1635]: time="2025-04-30T03:48:03.049497712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:03.049644 containerd[1635]: time="2025-04-30T03:48:03.049558738Z" level=info msg="RemovePodSandbox \"1cef62b74739ba5db13b29a56ba542e2ca312645f304f645d452b32d5acc8611\" returns successfully" Apr 30 03:48:03.049965 containerd[1635]: time="2025-04-30T03:48:03.049942316Z" level=info msg="StopPodSandbox for \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\"" Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.080 [WARNING][6175] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0", GenerateName:"calico-apiserver-855c9c9d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855c9c9d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc", Pod:"calico-apiserver-855c9c9d54-2tcgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb2c30de00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.080 [INFO][6175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.080 [INFO][6175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" iface="eth0" netns="" Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.080 [INFO][6175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.080 [INFO][6175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.097 [INFO][6182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" HandleID="k8s-pod-network.5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.097 [INFO][6182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.097 [INFO][6182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.102 [WARNING][6182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" HandleID="k8s-pod-network.5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.102 [INFO][6182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" HandleID="k8s-pod-network.5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.104 [INFO][6182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:03.107968 containerd[1635]: 2025-04-30 03:48:03.106 [INFO][6175] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:48:03.107968 containerd[1635]: time="2025-04-30T03:48:03.107936068Z" level=info msg="TearDown network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\" successfully" Apr 30 03:48:03.107968 containerd[1635]: time="2025-04-30T03:48:03.107957258Z" level=info msg="StopPodSandbox for \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\" returns successfully" Apr 30 03:48:03.112178 containerd[1635]: time="2025-04-30T03:48:03.108342088Z" level=info msg="RemovePodSandbox for \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\"" Apr 30 03:48:03.112178 containerd[1635]: time="2025-04-30T03:48:03.108379479Z" level=info msg="Forcibly stopping sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\"" Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.161 [WARNING][6200] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0", GenerateName:"calico-apiserver-855c9c9d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 47, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855c9c9d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc", Pod:"calico-apiserver-855c9c9d54-2tcgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cb2c30de00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.161 [INFO][6200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.161 [INFO][6200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" iface="eth0" netns="" Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.161 [INFO][6200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.161 [INFO][6200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.182 [INFO][6208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" HandleID="k8s-pod-network.5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.182 [INFO][6208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.182 [INFO][6208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.187 [WARNING][6208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" HandleID="k8s-pod-network.5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.187 [INFO][6208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" HandleID="k8s-pod-network.5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.189 [INFO][6208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:03.193536 containerd[1635]: 2025-04-30 03:48:03.191 [INFO][6200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f" Apr 30 03:48:03.194564 containerd[1635]: time="2025-04-30T03:48:03.193949600Z" level=info msg="TearDown network for sandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\" successfully" Apr 30 03:48:03.196769 containerd[1635]: time="2025-04-30T03:48:03.196737453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:48:03.196962 containerd[1635]: time="2025-04-30T03:48:03.196944897Z" level=info msg="RemovePodSandbox \"5ad5642b9e3dc0f2f7fd655265af38332538805e529542edfe188dafb57b4a5f\" returns successfully" Apr 30 03:48:03.372917 kubelet[3046]: I0430 03:48:03.372768 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d6dd767f5-srwnv" podStartSLOduration=2.372754535 podStartE2EDuration="2.372754535s" podCreationTimestamp="2025-04-30 03:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:48:03.35234857 +0000 UTC m=+61.523880451" watchObservedRunningTime="2025-04-30 03:48:03.372754535 +0000 UTC m=+61.544286415" Apr 30 03:48:03.653405 containerd[1635]: time="2025-04-30T03:48:03.653119278Z" level=info msg="StopContainer for \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\" with timeout 30 (s)" Apr 30 03:48:03.653984 containerd[1635]: time="2025-04-30T03:48:03.653920379Z" level=info msg="Stop container \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\" with signal terminated" Apr 30 03:48:03.692369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655-rootfs.mount: Deactivated successfully. 
Apr 30 03:48:03.699328 containerd[1635]: time="2025-04-30T03:48:03.699254214Z" level=info msg="shim disconnected" id=cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655 namespace=k8s.io Apr 30 03:48:03.699328 containerd[1635]: time="2025-04-30T03:48:03.699304269Z" level=warning msg="cleaning up after shim disconnected" id=cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655 namespace=k8s.io Apr 30 03:48:03.699328 containerd[1635]: time="2025-04-30T03:48:03.699312654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:03.713452 containerd[1635]: time="2025-04-30T03:48:03.713407826Z" level=info msg="StopContainer for \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\" returns successfully" Apr 30 03:48:03.714081 containerd[1635]: time="2025-04-30T03:48:03.714040778Z" level=info msg="StopPodSandbox for \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\"" Apr 30 03:48:03.714081 containerd[1635]: time="2025-04-30T03:48:03.714103957Z" level=info msg="Container to stop \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:48:03.717452 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc-shm.mount: Deactivated successfully. Apr 30 03:48:03.739215 containerd[1635]: time="2025-04-30T03:48:03.739026769Z" level=info msg="shim disconnected" id=3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc namespace=k8s.io Apr 30 03:48:03.739215 containerd[1635]: time="2025-04-30T03:48:03.739073187Z" level=warning msg="cleaning up after shim disconnected" id=3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc namespace=k8s.io Apr 30 03:48:03.739215 containerd[1635]: time="2025-04-30T03:48:03.739080671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:03.741806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc-rootfs.mount: Deactivated successfully. Apr 30 03:48:03.802664 systemd-networkd[1258]: cali6cb2c30de00: Link DOWN Apr 30 03:48:03.802865 systemd-networkd[1258]: cali6cb2c30de00: Lost carrier Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.799 [INFO][6294] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.799 [INFO][6294] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" iface="eth0" netns="/var/run/netns/cni-c0d04c02-38d1-d1e9-d391-b08dcc13785b" Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.801 [INFO][6294] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" iface="eth0" netns="/var/run/netns/cni-c0d04c02-38d1-d1e9-d391-b08dcc13785b" Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.816 [INFO][6294] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" after=16.083137ms iface="eth0" netns="/var/run/netns/cni-c0d04c02-38d1-d1e9-d391-b08dcc13785b" Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.816 [INFO][6294] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.816 [INFO][6294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.835 [INFO][6305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.835 [INFO][6305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.835 [INFO][6305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.868 [INFO][6305] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.868 [INFO][6305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.870 [INFO][6305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:03.873026 containerd[1635]: 2025-04-30 03:48:03.871 [INFO][6294] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:48:03.875731 containerd[1635]: time="2025-04-30T03:48:03.875685509Z" level=info msg="TearDown network for sandbox \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\" successfully" Apr 30 03:48:03.875731 containerd[1635]: time="2025-04-30T03:48:03.875729243Z" level=info msg="StopPodSandbox for \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\" returns successfully" Apr 30 03:48:03.877574 systemd[1]: run-netns-cni\x2dc0d04c02\x2d38d1\x2dd1e9\x2dd391\x2db08dcc13785b.mount: Deactivated successfully. 
Apr 30 03:48:03.909250 kubelet[3046]: I0430 03:48:03.908579 3046 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94898af1-d8d4-4a31-ac96-01740beca0cc" path="/var/lib/kubelet/pods/94898af1-d8d4-4a31-ac96-01740beca0cc/volumes" Apr 30 03:48:03.971923 kubelet[3046]: I0430 03:48:03.971466 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86-calico-apiserver-certs\") pod \"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86\" (UID: \"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86\") " Apr 30 03:48:03.971923 kubelet[3046]: I0430 03:48:03.971500 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn8lf\" (UniqueName: \"kubernetes.io/projected/e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86-kube-api-access-sn8lf\") pod \"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86\" (UID: \"e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86\") " Apr 30 03:48:03.976044 kubelet[3046]: I0430 03:48:03.976011 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86" (UID: "e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:48:03.976295 kubelet[3046]: I0430 03:48:03.976261 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86-kube-api-access-sn8lf" (OuterVolumeSpecName: "kube-api-access-sn8lf") pod "e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86" (UID: "e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86"). InnerVolumeSpecName "kube-api-access-sn8lf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:48:04.072506 kubelet[3046]: I0430 03:48:04.072441 3046 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86-calico-apiserver-certs\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:04.072506 kubelet[3046]: I0430 03:48:04.072481 3046 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sn8lf\" (UniqueName: \"kubernetes.io/projected/e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86-kube-api-access-sn8lf\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:04.099077 systemd-networkd[1258]: califabb3f401cb: Gained IPv6LL Apr 30 03:48:04.354316 kubelet[3046]: I0430 03:48:04.354259 3046 scope.go:117] "RemoveContainer" containerID="cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655" Apr 30 03:48:04.358118 containerd[1635]: time="2025-04-30T03:48:04.357973798Z" level=info msg="RemoveContainer for \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\"" Apr 30 03:48:04.385349 containerd[1635]: time="2025-04-30T03:48:04.385308217Z" level=info msg="RemoveContainer for \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\" returns successfully" Apr 30 03:48:04.398727 kubelet[3046]: I0430 03:48:04.398695 3046 scope.go:117] "RemoveContainer" containerID="cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655" Apr 30 03:48:04.405381 containerd[1635]: time="2025-04-30T03:48:04.398992640Z" level=error msg="ContainerStatus for \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\": not found" Apr 30 03:48:04.405529 kubelet[3046]: E0430 03:48:04.405507 3046 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\": not found" containerID="cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655" Apr 30 03:48:04.405616 kubelet[3046]: I0430 03:48:04.405546 3046 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655"} err="failed to get container status \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\": rpc error: code = NotFound desc = an error occurred when try to find container \"cffeaf3e19de132af0d98ef1b1e492bd893ba75fa1193f4b7911895dfeaa7655\": not found" Apr 30 03:48:04.676495 systemd-journald[1184]: Under memory pressure, flushing caches. Apr 30 03:48:04.674611 systemd-resolved[1522]: Under memory pressure, flushing caches. Apr 30 03:48:04.674659 systemd-resolved[1522]: Flushed all caches. Apr 30 03:48:04.693040 systemd[1]: var-lib-kubelet-pods-e0d8d012\x2de7a1\x2d4b65\x2d9dc5\x2dbdfa589a9f86-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsn8lf.mount: Deactivated successfully. Apr 30 03:48:04.693220 systemd[1]: var-lib-kubelet-pods-e0d8d012\x2de7a1\x2d4b65\x2d9dc5\x2dbdfa589a9f86-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Apr 30 03:48:05.907762 kubelet[3046]: I0430 03:48:05.907728 3046 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86" path="/var/lib/kubelet/pods/e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86/volumes" Apr 30 03:48:13.316204 containerd[1635]: time="2025-04-30T03:48:13.316163944Z" level=info msg="StopContainer for \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\" with timeout 300 (s)" Apr 30 03:48:13.319570 containerd[1635]: time="2025-04-30T03:48:13.319217823Z" level=info msg="Stop container \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\" with signal terminated" Apr 30 03:48:13.489865 containerd[1635]: time="2025-04-30T03:48:13.489612130Z" level=info msg="StopContainer for \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\" with timeout 30 (s)" Apr 30 03:48:13.490869 containerd[1635]: time="2025-04-30T03:48:13.490772323Z" level=info msg="Stop container \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\" with signal terminated" Apr 30 03:48:13.555187 containerd[1635]: time="2025-04-30T03:48:13.554931143Z" level=info msg="shim disconnected" id=ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd namespace=k8s.io Apr 30 03:48:13.555187 containerd[1635]: time="2025-04-30T03:48:13.555039089Z" level=warning msg="cleaning up after shim disconnected" id=ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd namespace=k8s.io Apr 30 03:48:13.555187 containerd[1635]: time="2025-04-30T03:48:13.555048557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:13.556310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd-rootfs.mount: Deactivated successfully. Apr 30 03:48:13.579124 containerd[1635]: time="2025-04-30T03:48:13.579098518Z" level=info msg="StopContainer for \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\" with timeout 5 (s)" Apr 30 03:48:13.580309 containerd[1635]: time="2025-04-30T03:48:13.579517673Z" level=info msg="Stop container \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\" with signal terminated" Apr 30 03:48:13.615645 containerd[1635]: time="2025-04-30T03:48:13.615529254Z" level=info msg="StopContainer for \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\" returns successfully" Apr 30 03:48:13.617634 containerd[1635]: time="2025-04-30T03:48:13.617074819Z" level=info msg="StopPodSandbox for \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\"" Apr 30 03:48:13.617634 containerd[1635]: time="2025-04-30T03:48:13.617163527Z" level=info msg="Container to stop \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:48:13.622363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8-shm.mount: Deactivated successfully. 
Apr 30 03:48:13.648768 containerd[1635]: time="2025-04-30T03:48:13.648612676Z" level=info msg="shim disconnected" id=c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39 namespace=k8s.io Apr 30 03:48:13.650465 containerd[1635]: time="2025-04-30T03:48:13.649965905Z" level=warning msg="cleaning up after shim disconnected" id=c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39 namespace=k8s.io Apr 30 03:48:13.650465 containerd[1635]: time="2025-04-30T03:48:13.649982546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:13.649407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39-rootfs.mount: Deactivated successfully. Apr 30 03:48:13.679049 containerd[1635]: time="2025-04-30T03:48:13.678517593Z" level=info msg="shim disconnected" id=b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8 namespace=k8s.io Apr 30 03:48:13.679190 containerd[1635]: time="2025-04-30T03:48:13.679175251Z" level=warning msg="cleaning up after shim disconnected" id=b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8 namespace=k8s.io Apr 30 03:48:13.679920 containerd[1635]: time="2025-04-30T03:48:13.679230366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:13.680491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8-rootfs.mount: Deactivated successfully. Apr 30 03:48:13.693913 containerd[1635]: time="2025-04-30T03:48:13.693815687Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:48:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:48:13.694488 containerd[1635]: time="2025-04-30T03:48:13.694282423Z" level=info msg="StopContainer for \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\" returns successfully" Apr 30 03:48:13.695159 containerd[1635]: time="2025-04-30T03:48:13.695037447Z" level=info msg="StopPodSandbox for \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\"" Apr 30 03:48:13.695159 containerd[1635]: time="2025-04-30T03:48:13.695064107Z" level=info msg="Container to stop \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:48:13.695159 containerd[1635]: time="2025-04-30T03:48:13.695074496Z" level=info msg="Container to stop \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:48:13.695159 containerd[1635]: time="2025-04-30T03:48:13.695082663Z" level=info msg="Container to stop \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:48:13.730011 containerd[1635]: time="2025-04-30T03:48:13.729919674Z" level=info msg="shim disconnected" id=95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981 namespace=k8s.io Apr 30 03:48:13.730011 containerd[1635]: time="2025-04-30T03:48:13.729962265Z" level=warning msg="cleaning up after shim disconnected" id=95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981 namespace=k8s.io Apr 30 03:48:13.730011 containerd[1635]: time="2025-04-30T03:48:13.729970660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:13.759454 containerd[1635]: 
time="2025-04-30T03:48:13.759381740Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:48:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:48:13.765158 containerd[1635]: time="2025-04-30T03:48:13.764033872Z" level=info msg="TearDown network for sandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" successfully" Apr 30 03:48:13.765158 containerd[1635]: time="2025-04-30T03:48:13.764057958Z" level=info msg="StopPodSandbox for \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" returns successfully" Apr 30 03:48:13.783094 systemd-networkd[1258]: calib9c823a46ef: Link DOWN Apr 30 03:48:13.783101 systemd-networkd[1258]: calib9c823a46ef: Lost carrier Apr 30 03:48:13.806694 kubelet[3046]: I0430 03:48:13.806653 3046 topology_manager.go:215] "Topology Admit Handler" podUID="7a864509-b61f-4907-a4a8-6136d86fa60f" podNamespace="calico-system" podName="calico-node-h6cvk" Apr 30 03:48:13.812264 kubelet[3046]: E0430 03:48:13.808956 3046 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94898af1-d8d4-4a31-ac96-01740beca0cc" containerName="calico-apiserver" Apr 30 03:48:13.812264 kubelet[3046]: E0430 03:48:13.808978 3046 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86" containerName="calico-apiserver" Apr 30 03:48:13.812264 kubelet[3046]: E0430 03:48:13.808984 3046 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" containerName="calico-node" Apr 30 03:48:13.812264 kubelet[3046]: E0430 03:48:13.809015 3046 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" containerName="flexvol-driver" Apr 30 03:48:13.812264 kubelet[3046]: E0430 03:48:13.809021 3046 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" containerName="install-cni" Apr 30 03:48:13.812264 kubelet[3046]: I0430 03:48:13.809049 3046 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d8d012-e7a1-4b65-9dc5-bdfa589a9f86" containerName="calico-apiserver" Apr 30 03:48:13.812264 kubelet[3046]: I0430 03:48:13.809055 3046 memory_manager.go:354] "RemoveStaleState removing state" podUID="c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" containerName="calico-node" Apr 30 03:48:13.812264 kubelet[3046]: I0430 03:48:13.809059 3046 memory_manager.go:354] "RemoveStaleState removing state" podUID="94898af1-d8d4-4a31-ac96-01740beca0cc" containerName="calico-apiserver" Apr 30 03:48:13.839577 kubelet[3046]: I0430 03:48:13.837360 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-bin-dir\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839577 kubelet[3046]: I0430 03:48:13.837487 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sst5\" (UniqueName: \"kubernetes.io/projected/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-kube-api-access-5sst5\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839577 kubelet[3046]: I0430 03:48:13.837513 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-policysync\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839577 kubelet[3046]: I0430 03:48:13.837675 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-node-certs\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839577 kubelet[3046]: I0430 03:48:13.837696 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-net-dir\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839577 kubelet[3046]: I0430 03:48:13.837746 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:48:13.839773 kubelet[3046]: I0430 03:48:13.837980 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-log-dir\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839773 kubelet[3046]: I0430 03:48:13.838009 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-tigera-ca-bundle\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839773 kubelet[3046]: I0430 03:48:13.838030 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-var-run-calico\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839773 kubelet[3046]: I0430 03:48:13.838558 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-flexvol-driver-host\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839773 kubelet[3046]: I0430 03:48:13.838579 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-var-lib-calico\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839773 kubelet[3046]: I0430 03:48:13.838597 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-xtables-lock\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839929 kubelet[3046]: I0430 03:48:13.838616 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-lib-modules\") pod \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\" (UID: \"c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2\") " Apr 30 03:48:13.839929 kubelet[3046]: I0430 03:48:13.838691 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a864509-b61f-4907-a4a8-6136d86fa60f-tigera-ca-bundle\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.839929 kubelet[3046]: I0430 03:48:13.838717 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7a864509-b61f-4907-a4a8-6136d86fa60f-cni-bin-dir\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.839929 kubelet[3046]: I0430 03:48:13.838736 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7a864509-b61f-4907-a4a8-6136d86fa60f-policysync\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.839929 kubelet[3046]: I0430 03:48:13.838755 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a864509-b61f-4907-a4a8-6136d86fa60f-lib-modules\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.839929 kubelet[3046]: I0430 03:48:13.838773 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a864509-b61f-4907-a4a8-6136d86fa60f-var-lib-calico\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.840096 kubelet[3046]: I0430 03:48:13.840013 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7a864509-b61f-4907-a4a8-6136d86fa60f-node-certs\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.840096 kubelet[3046]: I0430 03:48:13.840038 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7a864509-b61f-4907-a4a8-6136d86fa60f-cni-net-dir\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.840096 kubelet[3046]: I0430 03:48:13.840056 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a864509-b61f-4907-a4a8-6136d86fa60f-xtables-lock\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.840096 kubelet[3046]: I0430 03:48:13.840069 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7a864509-b61f-4907-a4a8-6136d86fa60f-cni-log-dir\") pod \"calico-node-h6cvk\" 
(UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.840096 kubelet[3046]: I0430 03:48:13.840083 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7a864509-b61f-4907-a4a8-6136d86fa60f-flexvol-driver-host\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.843987 kubelet[3046]: I0430 03:48:13.840098 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7a864509-b61f-4907-a4a8-6136d86fa60f-var-run-calico\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.843987 kubelet[3046]: I0430 03:48:13.840109 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lh8h\" (UniqueName: \"kubernetes.io/projected/7a864509-b61f-4907-a4a8-6136d86fa60f-kube-api-access-9lh8h\") pod \"calico-node-h6cvk\" (UID: \"7a864509-b61f-4907-a4a8-6136d86fa60f\") " pod="calico-system/calico-node-h6cvk" Apr 30 03:48:13.843987 kubelet[3046]: I0430 03:48:13.840125 3046 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-bin-dir\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.843987 kubelet[3046]: I0430 03:48:13.838254 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-policysync" (OuterVolumeSpecName: "policysync") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:48:13.843987 kubelet[3046]: I0430 03:48:13.838277 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:48:13.844161 kubelet[3046]: I0430 03:48:13.838290 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:48:13.844161 kubelet[3046]: I0430 03:48:13.838510 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:48:13.844161 kubelet[3046]: I0430 03:48:13.841224 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:48:13.844161 kubelet[3046]: I0430 03:48:13.841240 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:48:13.844161 kubelet[3046]: I0430 03:48:13.841566 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:48:13.844269 kubelet[3046]: I0430 03:48:13.841592 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:48:13.856827 kubelet[3046]: I0430 03:48:13.856762 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-node-certs" (OuterVolumeSpecName: "node-certs") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:48:13.857222 kubelet[3046]: I0430 03:48:13.857190 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-kube-api-access-5sst5" (OuterVolumeSpecName: "kube-api-access-5sst5") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "kube-api-access-5sst5". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:48:13.863482 kubelet[3046]: I0430 03:48:13.863462 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" (UID: "c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.778 [INFO][6527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.778 [INFO][6527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" iface="eth0" netns="/var/run/netns/cni-5257d7ef-b370-a291-95f8-8bd4b930e94c" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.781 [INFO][6527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" iface="eth0" netns="/var/run/netns/cni-5257d7ef-b370-a291-95f8-8bd4b930e94c" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.794 [INFO][6527] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" after=16.414092ms iface="eth0" netns="/var/run/netns/cni-5257d7ef-b370-a291-95f8-8bd4b930e94c" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.795 [INFO][6527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.795 [INFO][6527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.849 [INFO][6550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.852 [INFO][6550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.852 [INFO][6550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.895 [INFO][6550] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.895 [INFO][6550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.897 [INFO][6550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:48:13.899808 containerd[1635]: 2025-04-30 03:48:13.898 [INFO][6527] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:48:13.900805 containerd[1635]: time="2025-04-30T03:48:13.900399403Z" level=info msg="TearDown network for sandbox \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\" successfully" Apr 30 03:48:13.900805 containerd[1635]: time="2025-04-30T03:48:13.900441052Z" level=info msg="StopPodSandbox for \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\" returns successfully" Apr 30 03:48:13.941217 kubelet[3046]: I0430 03:48:13.941180 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn2nf\" (UniqueName: \"kubernetes.io/projected/c2c1ef43-1732-4ee2-984a-7c96357acb4c-kube-api-access-hn2nf\") pod \"c2c1ef43-1732-4ee2-984a-7c96357acb4c\" (UID: \"c2c1ef43-1732-4ee2-984a-7c96357acb4c\") " Apr 30 03:48:13.941217 kubelet[3046]: I0430 03:48:13.941218 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2c1ef43-1732-4ee2-984a-7c96357acb4c-tigera-ca-bundle\") pod \"c2c1ef43-1732-4ee2-984a-7c96357acb4c\" (UID: \"c2c1ef43-1732-4ee2-984a-7c96357acb4c\") " Apr 30 03:48:13.941362 kubelet[3046]: I0430 03:48:13.941356 3046 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-var-run-calico\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941385 kubelet[3046]: I0430 03:48:13.941365 3046 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-flexvol-driver-host\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941385 kubelet[3046]: I0430 03:48:13.941373 3046 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-xtables-lock\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941385 kubelet[3046]: I0430 03:48:13.941379 3046 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-var-lib-calico\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941440 kubelet[3046]: I0430 03:48:13.941386 3046 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-policysync\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941440 kubelet[3046]: I0430 03:48:13.941392 3046 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-net-dir\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941440 kubelet[3046]: I0430 03:48:13.941399 3046 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-cni-log-dir\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941440 kubelet[3046]: I0430 03:48:13.941405 3046 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-lib-modules\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941440 kubelet[3046]: I0430 03:48:13.941412 3046 
reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5sst5\" (UniqueName: \"kubernetes.io/projected/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-kube-api-access-5sst5\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941440 kubelet[3046]: I0430 03:48:13.941418 3046 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-node-certs\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.941440 kubelet[3046]: I0430 03:48:13.941424 3046 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2-tigera-ca-bundle\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:13.950400 kubelet[3046]: I0430 03:48:13.949616 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c1ef43-1732-4ee2-984a-7c96357acb4c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c2c1ef43-1732-4ee2-984a-7c96357acb4c" (UID: "c2c1ef43-1732-4ee2-984a-7c96357acb4c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:48:13.950817 kubelet[3046]: I0430 03:48:13.950797 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c1ef43-1732-4ee2-984a-7c96357acb4c-kube-api-access-hn2nf" (OuterVolumeSpecName: "kube-api-access-hn2nf") pod "c2c1ef43-1732-4ee2-984a-7c96357acb4c" (UID: "c2c1ef43-1732-4ee2-984a-7c96357acb4c"). InnerVolumeSpecName "kube-api-access-hn2nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:48:14.042612 kubelet[3046]: I0430 03:48:14.042408 3046 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hn2nf\" (UniqueName: \"kubernetes.io/projected/c2c1ef43-1732-4ee2-984a-7c96357acb4c-kube-api-access-hn2nf\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:14.042612 kubelet[3046]: I0430 03:48:14.042437 3046 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2c1ef43-1732-4ee2-984a-7c96357acb4c-tigera-ca-bundle\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\"" Apr 30 03:48:14.127248 containerd[1635]: time="2025-04-30T03:48:14.126269054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h6cvk,Uid:7a864509-b61f-4907-a4a8-6136d86fa60f,Namespace:calico-system,Attempt:0,}" Apr 30 03:48:14.151936 containerd[1635]: time="2025-04-30T03:48:14.150965713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:48:14.151936 containerd[1635]: time="2025-04-30T03:48:14.151118703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:48:14.151936 containerd[1635]: time="2025-04-30T03:48:14.151128682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:48:14.151936 containerd[1635]: time="2025-04-30T03:48:14.151406580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:48:14.187310 containerd[1635]: time="2025-04-30T03:48:14.186530825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h6cvk,Uid:7a864509-b61f-4907-a4a8-6136d86fa60f,Namespace:calico-system,Attempt:0,} returns sandbox id \"94e062e6a3e255ccac6acf2b6806e9774d2599857441d585b153fa29aa83e388\"" Apr 30 03:48:14.189093 containerd[1635]: time="2025-04-30T03:48:14.189069174Z" level=info msg="CreateContainer within sandbox \"94e062e6a3e255ccac6acf2b6806e9774d2599857441d585b153fa29aa83e388\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:48:14.206284 containerd[1635]: time="2025-04-30T03:48:14.206250715Z" level=info msg="CreateContainer within sandbox \"94e062e6a3e255ccac6acf2b6806e9774d2599857441d585b153fa29aa83e388\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e4273b9bacce071a8f39b3442211917a31df681b76e7b3c2d971c133b2ce5bb\"" Apr 30 03:48:14.207000 containerd[1635]: time="2025-04-30T03:48:14.206807782Z" level=info msg="StartContainer for \"7e4273b9bacce071a8f39b3442211917a31df681b76e7b3c2d971c133b2ce5bb\"" Apr 30 03:48:14.266204 containerd[1635]: time="2025-04-30T03:48:14.266175035Z" level=info msg="StartContainer for \"7e4273b9bacce071a8f39b3442211917a31df681b76e7b3c2d971c133b2ce5bb\" returns successfully" Apr 30 03:48:14.328303 containerd[1635]: time="2025-04-30T03:48:14.328237535Z" level=info msg="shim disconnected" id=7e4273b9bacce071a8f39b3442211917a31df681b76e7b3c2d971c133b2ce5bb namespace=k8s.io Apr 30 03:48:14.328303 containerd[1635]: time="2025-04-30T03:48:14.328282201Z" level=warning msg="cleaning up after shim disconnected" id=7e4273b9bacce071a8f39b3442211917a31df681b76e7b3c2d971c133b2ce5bb namespace=k8s.io Apr 30 03:48:14.328303 containerd[1635]: time="2025-04-30T03:48:14.328290416Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:14.373934 kubelet[3046]: I0430 03:48:14.373822 3046 scope.go:117] "RemoveContainer" containerID="ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd" Apr 30 03:48:14.375668 containerd[1635]: time="2025-04-30T03:48:14.375599646Z" level=info msg="RemoveContainer for \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\"" Apr 30 03:48:14.379141 containerd[1635]: time="2025-04-30T03:48:14.378647462Z" level=info msg="RemoveContainer for \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\" returns successfully" Apr 30 03:48:14.379192 kubelet[3046]: I0430 03:48:14.378797 3046 scope.go:117] "RemoveContainer" containerID="ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd" Apr 30 03:48:14.379919 containerd[1635]: time="2025-04-30T03:48:14.379487057Z" level=error msg="ContainerStatus for \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\": not found" Apr 30 03:48:14.379966 kubelet[3046]: E0430 03:48:14.379760 3046 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\": not found" containerID="ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd" Apr 30 03:48:14.379966 kubelet[3046]: I0430 03:48:14.379779 3046 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd"} err="failed to get container status \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea7f5d8fda3930f1ed38ef1b15a32068c291dc1a7dc062926607e214e34561bd\": not found" Apr 30 03:48:14.379966 kubelet[3046]: I0430 03:48:14.379794 3046 scope.go:117] "RemoveContainer" containerID="c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39" Apr 30 03:48:14.385743 containerd[1635]: time="2025-04-30T03:48:14.385711443Z" level=info msg="RemoveContainer for \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\"" Apr 30 03:48:14.389489 containerd[1635]: time="2025-04-30T03:48:14.387591964Z" level=info msg="CreateContainer within sandbox \"94e062e6a3e255ccac6acf2b6806e9774d2599857441d585b153fa29aa83e388\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:48:14.389489 containerd[1635]: time="2025-04-30T03:48:14.389116859Z" level=info msg="RemoveContainer for \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\" returns successfully" Apr 30 03:48:14.392002 kubelet[3046]: I0430 03:48:14.391944 3046 scope.go:117] "RemoveContainer" containerID="6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61" Apr 30 03:48:14.393731 containerd[1635]: time="2025-04-30T03:48:14.393712685Z" level=info msg="RemoveContainer for \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\"" Apr 30 03:48:14.397689 containerd[1635]: time="2025-04-30T03:48:14.397661463Z" level=info msg="RemoveContainer for \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\" returns successfully" Apr 30 03:48:14.397839 kubelet[3046]: I0430 03:48:14.397809 3046 scope.go:117] "RemoveContainer" containerID="523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace" Apr 30 03:48:14.398542 containerd[1635]: time="2025-04-30T03:48:14.398521123Z" level=info msg="RemoveContainer for \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\"" Apr 30 03:48:14.402689 containerd[1635]: time="2025-04-30T03:48:14.402663027Z" level=info msg="RemoveContainer for \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\" returns successfully" Apr 30 03:48:14.403095 kubelet[3046]: I0430 03:48:14.403044 3046 scope.go:117] "RemoveContainer" containerID="c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39" Apr 30 03:48:14.403376 containerd[1635]: time="2025-04-30T03:48:14.403186812Z" level=error msg="ContainerStatus for \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\": not found" Apr 30 03:48:14.403599 kubelet[3046]: E0430 03:48:14.403280 3046 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\": not found" containerID="c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39" Apr 30 03:48:14.403599 kubelet[3046]: I0430 03:48:14.403299 3046 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39"} err="failed to get container status 
\"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3e5110b459900bd3808e96cd4b12da2b79a16e074a4c30327dd51e085adbf39\": not found" Apr 30 03:48:14.403599 kubelet[3046]: I0430 03:48:14.403326 3046 scope.go:117] "RemoveContainer" containerID="6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61" Apr 30 03:48:14.404243 kubelet[3046]: E0430 03:48:14.404118 3046 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\": not found" containerID="6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61" Apr 30 03:48:14.404243 kubelet[3046]: I0430 03:48:14.404133 3046 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61"} err="failed to get container status \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\": rpc error: code = NotFound desc = an error occurred when try to find container \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\": not found" Apr 30 03:48:14.404243 kubelet[3046]: I0430 03:48:14.404143 3046 scope.go:117] "RemoveContainer" containerID="523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace" Apr 30 03:48:14.405038 containerd[1635]: time="2025-04-30T03:48:14.403834932Z" level=error msg="ContainerStatus for \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6107d0ef9475aefcee285683d508ffae69294f6563a51a79018b1b2cf7195d61\": not found" Apr 30 03:48:14.405589 containerd[1635]: time="2025-04-30T03:48:14.405549168Z" level=error msg="ContainerStatus for \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\": not found" Apr 30 03:48:14.406785 kubelet[3046]: E0430 03:48:14.406751 3046 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\": not found" containerID="523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace" Apr 30 03:48:14.406785 kubelet[3046]: I0430 03:48:14.406769 3046 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace"} err="failed to get container status \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\": rpc error: code = NotFound desc = an error occurred when try to find container \"523feefa3d1856551b8bcdc80d22c7f1851dce7cb8a5337f2d4ab6fd4f353ace\": not found" Apr 30 03:48:14.411346 containerd[1635]: time="2025-04-30T03:48:14.411306518Z" level=info msg="CreateContainer within sandbox \"94e062e6a3e255ccac6acf2b6806e9774d2599857441d585b153fa29aa83e388\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75f9f8c6e1f26fb49b69be2bed328a283fdba20bb350c6cd85aa8633bbc77f0e\"" Apr 30 03:48:14.412452 containerd[1635]: time="2025-04-30T03:48:14.412302910Z" level=info msg="StartContainer for 
\"75f9f8c6e1f26fb49b69be2bed328a283fdba20bb350c6cd85aa8633bbc77f0e\"" Apr 30 03:48:14.461800 systemd[1]: var-lib-kubelet-pods-c2c1ef43\x2d1732\x2d4ee2\x2d984a\x2d7c96357acb4c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Apr 30 03:48:14.464035 systemd[1]: run-netns-cni\x2d5257d7ef\x2db370\x2da291\x2d95f8\x2d8bd4b930e94c.mount: Deactivated successfully. Apr 30 03:48:14.464117 systemd[1]: var-lib-kubelet-pods-c493ab4b\x2d4f1b\x2d4ecf\x2dbbb9\x2d16eb01191eb2-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Apr 30 03:48:14.464195 systemd[1]: var-lib-kubelet-pods-c2c1ef43\x2d1732\x2d4ee2\x2d984a\x2d7c96357acb4c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhn2nf.mount: Deactivated successfully. Apr 30 03:48:14.464269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981-rootfs.mount: Deactivated successfully. Apr 30 03:48:14.464335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981-shm.mount: Deactivated successfully. Apr 30 03:48:14.464402 systemd[1]: var-lib-kubelet-pods-c493ab4b\x2d4f1b\x2d4ecf\x2dbbb9\x2d16eb01191eb2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5sst5.mount: Deactivated successfully. Apr 30 03:48:14.464614 systemd[1]: var-lib-kubelet-pods-c493ab4b\x2d4f1b\x2d4ecf\x2dbbb9\x2d16eb01191eb2-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Apr 30 03:48:14.530594 containerd[1635]: time="2025-04-30T03:48:14.525508679Z" level=info msg="shim disconnected" id=4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff namespace=k8s.io Apr 30 03:48:14.530234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff-rootfs.mount: Deactivated successfully. Apr 30 03:48:14.532154 containerd[1635]: time="2025-04-30T03:48:14.531993841Z" level=warning msg="cleaning up after shim disconnected" id=4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff namespace=k8s.io Apr 30 03:48:14.532154 containerd[1635]: time="2025-04-30T03:48:14.532021454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:48:14.583980 containerd[1635]: time="2025-04-30T03:48:14.583940876Z" level=info msg="StopContainer for \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\" returns successfully" Apr 30 03:48:14.585233 containerd[1635]: time="2025-04-30T03:48:14.584803344Z" level=info msg="StopPodSandbox for \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\"" Apr 30 03:48:14.585233 containerd[1635]: time="2025-04-30T03:48:14.584827911Z" level=info msg="Container to stop \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:48:14.593428 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e-shm.mount: Deactivated successfully. 
Apr 30 03:48:14.632113 containerd[1635]: time="2025-04-30T03:48:14.631206494Z" level=info msg="StartContainer for \"75f9f8c6e1f26fb49b69be2bed328a283fdba20bb350c6cd85aa8633bbc77f0e\" returns successfully"
Apr 30 03:48:14.636170 containerd[1635]: time="2025-04-30T03:48:14.636037486Z" level=info msg="shim disconnected" id=434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e namespace=k8s.io
Apr 30 03:48:14.636170 containerd[1635]: time="2025-04-30T03:48:14.636075669Z" level=warning msg="cleaning up after shim disconnected" id=434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e namespace=k8s.io
Apr 30 03:48:14.636170 containerd[1635]: time="2025-04-30T03:48:14.636084755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:48:14.637366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e-rootfs.mount: Deactivated successfully.
Apr 30 03:48:14.655764 containerd[1635]: time="2025-04-30T03:48:14.655205518Z" level=info msg="TearDown network for sandbox \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\" successfully"
Apr 30 03:48:14.655764 containerd[1635]: time="2025-04-30T03:48:14.655238030Z" level=info msg="StopPodSandbox for \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\" returns successfully"
Apr 30 03:48:14.847986 kubelet[3046]: I0430 03:48:14.847950 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-typha-certs\") pod \"4a9a1592-fe0c-4cb6-b483-aa790fddb06a\" (UID: \"4a9a1592-fe0c-4cb6-b483-aa790fddb06a\") "
Apr 30 03:48:14.848613 kubelet[3046]: I0430 03:48:14.848148 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7dmp\" (UniqueName: \"kubernetes.io/projected/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-kube-api-access-r7dmp\") pod \"4a9a1592-fe0c-4cb6-b483-aa790fddb06a\" (UID: \"4a9a1592-fe0c-4cb6-b483-aa790fddb06a\") "
Apr 30 03:48:14.848613 kubelet[3046]: I0430 03:48:14.848169 3046 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-tigera-ca-bundle\") pod \"4a9a1592-fe0c-4cb6-b483-aa790fddb06a\" (UID: \"4a9a1592-fe0c-4cb6-b483-aa790fddb06a\") "
Apr 30 03:48:14.853135 kubelet[3046]: I0430 03:48:14.853112 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "4a9a1592-fe0c-4cb6-b483-aa790fddb06a" (UID: "4a9a1592-fe0c-4cb6-b483-aa790fddb06a"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 03:48:14.854020 kubelet[3046]: I0430 03:48:14.854004 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "4a9a1592-fe0c-4cb6-b483-aa790fddb06a" (UID: "4a9a1592-fe0c-4cb6-b483-aa790fddb06a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 03:48:14.855005 kubelet[3046]: I0430 03:48:14.854965 3046 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-kube-api-access-r7dmp" (OuterVolumeSpecName: "kube-api-access-r7dmp") pod "4a9a1592-fe0c-4cb6-b483-aa790fddb06a" (UID: "4a9a1592-fe0c-4cb6-b483-aa790fddb06a"). InnerVolumeSpecName "kube-api-access-r7dmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 03:48:14.949492 kubelet[3046]: I0430 03:48:14.949042 3046 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-typha-certs\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\""
Apr 30 03:48:14.949492 kubelet[3046]: I0430 03:48:14.949070 3046 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r7dmp\" (UniqueName: \"kubernetes.io/projected/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-kube-api-access-r7dmp\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\""
Apr 30 03:48:14.949492 kubelet[3046]: I0430 03:48:14.949078 3046 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9a1592-fe0c-4cb6-b483-aa790fddb06a-tigera-ca-bundle\") on node \"ci-4081-3-3-c-b54c1f5c93\" DevicePath \"\""
Apr 30 03:48:14.979994 kubelet[3046]: I0430 03:48:14.979625 3046 topology_manager.go:215] "Topology Admit Handler" podUID="597d54e3-91fe-4e69-adbe-f92aaaa78096" podNamespace="calico-system" podName="calico-typha-988dc976d-qnfth"
Apr 30 03:48:14.979994 kubelet[3046]: E0430 03:48:14.979693 3046 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4a9a1592-fe0c-4cb6-b483-aa790fddb06a" containerName="calico-typha"
Apr 30 03:48:14.979994 kubelet[3046]: E0430 03:48:14.979702 3046 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2c1ef43-1732-4ee2-984a-7c96357acb4c" containerName="calico-kube-controllers"
Apr 30 03:48:14.979994 kubelet[3046]: I0430 03:48:14.979724 3046 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c1ef43-1732-4ee2-984a-7c96357acb4c" containerName="calico-kube-controllers"
Apr 30 03:48:14.979994 kubelet[3046]: I0430 03:48:14.979731 3046 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9a1592-fe0c-4cb6-b483-aa790fddb06a" containerName="calico-typha"
Apr 30 03:48:15.050271 kubelet[3046]: I0430 03:48:15.050168 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfmk4\" (UniqueName: \"kubernetes.io/projected/597d54e3-91fe-4e69-adbe-f92aaaa78096-kube-api-access-tfmk4\") pod \"calico-typha-988dc976d-qnfth\" (UID: \"597d54e3-91fe-4e69-adbe-f92aaaa78096\") " pod="calico-system/calico-typha-988dc976d-qnfth"
Apr 30 03:48:15.050271 kubelet[3046]: I0430 03:48:15.050205 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/597d54e3-91fe-4e69-adbe-f92aaaa78096-typha-certs\") pod \"calico-typha-988dc976d-qnfth\" (UID: \"597d54e3-91fe-4e69-adbe-f92aaaa78096\") " pod="calico-system/calico-typha-988dc976d-qnfth"
Apr 30 03:48:15.050271 kubelet[3046]: I0430 03:48:15.050222 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/597d54e3-91fe-4e69-adbe-f92aaaa78096-tigera-ca-bundle\") pod \"calico-typha-988dc976d-qnfth\" (UID: \"597d54e3-91fe-4e69-adbe-f92aaaa78096\") " pod="calico-system/calico-typha-988dc976d-qnfth"
Apr 30 03:48:15.291238 containerd[1635]: time="2025-04-30T03:48:15.290712962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-988dc976d-qnfth,Uid:597d54e3-91fe-4e69-adbe-f92aaaa78096,Namespace:calico-system,Attempt:0,}"
Apr 30 03:48:15.322941 containerd[1635]: time="2025-04-30T03:48:15.321700504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:48:15.322941 containerd[1635]: time="2025-04-30T03:48:15.321755868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:48:15.322941 containerd[1635]: time="2025-04-30T03:48:15.321775216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:48:15.322941 containerd[1635]: time="2025-04-30T03:48:15.321878942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:48:15.389251 kubelet[3046]: I0430 03:48:15.389015 3046 scope.go:117] "RemoveContainer" containerID="4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff"
Apr 30 03:48:15.392510 containerd[1635]: time="2025-04-30T03:48:15.391829078Z" level=info msg="RemoveContainer for \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\""
Apr 30 03:48:15.415949 containerd[1635]: time="2025-04-30T03:48:15.415663859Z" level=info msg="RemoveContainer for \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\" returns successfully"
Apr 30 03:48:15.416066 kubelet[3046]: I0430 03:48:15.415841 3046 scope.go:117] "RemoveContainer" containerID="4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff"
Apr 30 03:48:15.417641 containerd[1635]: time="2025-04-30T03:48:15.417332698Z" level=error msg="ContainerStatus for \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\": not found"
Apr 30 03:48:15.420295 kubelet[3046]: E0430 03:48:15.417446 3046 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\": not found" containerID="4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff"
Apr 30 03:48:15.420295 kubelet[3046]: I0430 03:48:15.417588 3046 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff"} err="failed to get container status \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\": rpc error: code = NotFound desc = an error occurred when try to find container \"4dd040ca34c5d11459991d3eea3c83ddea76b05b367a23e1f149d61755732fff\": not found"
Apr 30 03:48:15.456112 systemd[1]: var-lib-kubelet-pods-4a9a1592\x2dfe0c\x2d4cb6\x2db483\x2daa790fddb06a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Apr 30 03:48:15.456407 systemd[1]: var-lib-kubelet-pods-4a9a1592\x2dfe0c\x2d4cb6\x2db483\x2daa790fddb06a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr7dmp.mount: Deactivated successfully.
Apr 30 03:48:15.456684 systemd[1]: var-lib-kubelet-pods-4a9a1592\x2dfe0c\x2d4cb6\x2db483\x2daa790fddb06a-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Apr 30 03:48:15.466270 containerd[1635]: time="2025-04-30T03:48:15.466229160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-988dc976d-qnfth,Uid:597d54e3-91fe-4e69-adbe-f92aaaa78096,Namespace:calico-system,Attempt:0,} returns sandbox id \"3477592728af2b4eccdf8b8a0d42c16cef079c3a0665e9c2138901bb91323532\""
Apr 30 03:48:15.486813 containerd[1635]: time="2025-04-30T03:48:15.486718150Z" level=info msg="CreateContainer within sandbox \"3477592728af2b4eccdf8b8a0d42c16cef079c3a0665e9c2138901bb91323532\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 30 03:48:15.499850 containerd[1635]: time="2025-04-30T03:48:15.499686962Z" level=info msg="CreateContainer within sandbox \"3477592728af2b4eccdf8b8a0d42c16cef079c3a0665e9c2138901bb91323532\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"59393632a60a7189a369fa7c94e168357a657ed5585e85fc63ba20ea23207937\""
Apr 30 03:48:15.504600 containerd[1635]: time="2025-04-30T03:48:15.500392822Z" level=info msg="StartContainer for \"59393632a60a7189a369fa7c94e168357a657ed5585e85fc63ba20ea23207937\""
Apr 30 03:48:15.503391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2372893736.mount: Deactivated successfully.
Apr 30 03:48:15.581302 containerd[1635]: time="2025-04-30T03:48:15.581264903Z" level=info msg="StartContainer for \"59393632a60a7189a369fa7c94e168357a657ed5585e85fc63ba20ea23207937\" returns successfully"
Apr 30 03:48:15.674374 containerd[1635]: time="2025-04-30T03:48:15.674320857Z" level=info msg="shim disconnected" id=75f9f8c6e1f26fb49b69be2bed328a283fdba20bb350c6cd85aa8633bbc77f0e namespace=k8s.io
Apr 30 03:48:15.674374 containerd[1635]: time="2025-04-30T03:48:15.674367876Z" level=warning msg="cleaning up after shim disconnected" id=75f9f8c6e1f26fb49b69be2bed328a283fdba20bb350c6cd85aa8633bbc77f0e namespace=k8s.io
Apr 30 03:48:15.674374 containerd[1635]: time="2025-04-30T03:48:15.674375169Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:48:15.908017 kubelet[3046]: I0430 03:48:15.907885 3046 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9a1592-fe0c-4cb6-b483-aa790fddb06a" path="/var/lib/kubelet/pods/4a9a1592-fe0c-4cb6-b483-aa790fddb06a/volumes"
Apr 30 03:48:15.909292 kubelet[3046]: I0430 03:48:15.909254 3046 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c1ef43-1732-4ee2-984a-7c96357acb4c" path="/var/lib/kubelet/pods/c2c1ef43-1732-4ee2-984a-7c96357acb4c/volumes"
Apr 30 03:48:15.909905 kubelet[3046]: I0430 03:48:15.909853 3046 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2" path="/var/lib/kubelet/pods/c493ab4b-4f1b-4ecf-bbb9-16eb01191eb2/volumes"
Apr 30 03:48:16.422631 containerd[1635]: time="2025-04-30T03:48:16.422377937Z" level=info msg="CreateContainer within sandbox \"94e062e6a3e255ccac6acf2b6806e9774d2599857441d585b153fa29aa83e388\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 30 03:48:16.451860 containerd[1635]: time="2025-04-30T03:48:16.451810086Z" level=info msg="CreateContainer within sandbox \"94e062e6a3e255ccac6acf2b6806e9774d2599857441d585b153fa29aa83e388\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"effeab5e14766e3bac2cf943e5d1fd0dad624e8159a132953498428290215855\""
Apr 30 03:48:16.455576 containerd[1635]: time="2025-04-30T03:48:16.455524608Z" level=info msg="StartContainer for \"effeab5e14766e3bac2cf943e5d1fd0dad624e8159a132953498428290215855\""
Apr 30 03:48:16.466522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75f9f8c6e1f26fb49b69be2bed328a283fdba20bb350c6cd85aa8633bbc77f0e-rootfs.mount: Deactivated successfully.
Apr 30 03:48:16.601211 containerd[1635]: time="2025-04-30T03:48:16.600850026Z" level=info msg="StartContainer for \"effeab5e14766e3bac2cf943e5d1fd0dad624e8159a132953498428290215855\" returns successfully"
Apr 30 03:48:16.722307 kubelet[3046]: I0430 03:48:16.719737 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-988dc976d-qnfth" podStartSLOduration=3.719720839 podStartE2EDuration="3.719720839s" podCreationTimestamp="2025-04-30 03:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:48:16.473740932 +0000 UTC m=+74.645272814" watchObservedRunningTime="2025-04-30 03:48:16.719720839 +0000 UTC m=+74.891252719"
Apr 30 03:48:16.722307 kubelet[3046]: I0430 03:48:16.719925 3046 topology_manager.go:215] "Topology Admit Handler" podUID="b2d6aeb0-8573-4e18-ace1-d7373d90dd3a" podNamespace="calico-system" podName="calico-kube-controllers-6cf7bf66f6-xbpfk"
Apr 30 03:48:16.767160 kubelet[3046]: I0430 03:48:16.767134 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rvr6\" (UniqueName: \"kubernetes.io/projected/b2d6aeb0-8573-4e18-ace1-d7373d90dd3a-kube-api-access-4rvr6\") pod \"calico-kube-controllers-6cf7bf66f6-xbpfk\" (UID: \"b2d6aeb0-8573-4e18-ace1-d7373d90dd3a\") " pod="calico-system/calico-kube-controllers-6cf7bf66f6-xbpfk"
Apr 30 03:48:16.767525 kubelet[3046]: I0430 03:48:16.767477 3046 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2d6aeb0-8573-4e18-ace1-d7373d90dd3a-tigera-ca-bundle\") pod \"calico-kube-controllers-6cf7bf66f6-xbpfk\" (UID: \"b2d6aeb0-8573-4e18-ace1-d7373d90dd3a\") " pod="calico-system/calico-kube-controllers-6cf7bf66f6-xbpfk"
Apr 30 03:48:17.031492 containerd[1635]: time="2025-04-30T03:48:17.031353981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf7bf66f6-xbpfk,Uid:b2d6aeb0-8573-4e18-ace1-d7373d90dd3a,Namespace:calico-system,Attempt:0,}"
Apr 30 03:48:17.169658 systemd-networkd[1258]: calie8b49898ea2: Link UP
Apr 30 03:48:17.170264 systemd-networkd[1258]: calie8b49898ea2: Gained carrier
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.095 [INFO][6953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0 calico-kube-controllers-6cf7bf66f6- calico-system b2d6aeb0-8573-4e18-ace1-d7373d90dd3a 1135 0 2025-04-30 03:48:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cf7bf66f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-c-b54c1f5c93 calico-kube-controllers-6cf7bf66f6-xbpfk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie8b49898ea2 [] []}} ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Namespace="calico-system" Pod="calico-kube-controllers-6cf7bf66f6-xbpfk" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.095 [INFO][6953] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Namespace="calico-system" Pod="calico-kube-controllers-6cf7bf66f6-xbpfk" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.126 [INFO][6964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" HandleID="k8s-pod-network.a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.135 [INFO][6964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" HandleID="k8s-pod-network.a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-c-b54c1f5c93", "pod":"calico-kube-controllers-6cf7bf66f6-xbpfk", "timestamp":"2025-04-30 03:48:17.126829887 +0000 UTC"}, Hostname:"ci-4081-3-3-c-b54c1f5c93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.135 [INFO][6964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.135 [INFO][6964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.135 [INFO][6964] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-b54c1f5c93'
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.137 [INFO][6964] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" host="ci-4081-3-3-c-b54c1f5c93"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.141 [INFO][6964] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-b54c1f5c93"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.146 [INFO][6964] ipam/ipam.go 489: Trying affinity for 192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.149 [INFO][6964] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.151 [INFO][6964] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-3-c-b54c1f5c93"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.151 [INFO][6964] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" host="ci-4081-3-3-c-b54c1f5c93"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.153 [INFO][6964] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.157 [INFO][6964] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" host="ci-4081-3-3-c-b54c1f5c93"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.163 [INFO][6964] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.73/26] block=192.168.106.64/26 handle="k8s-pod-network.a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" host="ci-4081-3-3-c-b54c1f5c93"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.163 [INFO][6964] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.73/26] handle="k8s-pod-network.a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" host="ci-4081-3-3-c-b54c1f5c93"
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.163 [INFO][6964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:48:17.185954 containerd[1635]: 2025-04-30 03:48:17.163 [INFO][6964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.73/26] IPv6=[] ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" HandleID="k8s-pod-network.a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0"
Apr 30 03:48:17.190080 containerd[1635]: 2025-04-30 03:48:17.166 [INFO][6953] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Namespace="calico-system" Pod="calico-kube-controllers-6cf7bf66f6-xbpfk" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0", GenerateName:"calico-kube-controllers-6cf7bf66f6-", Namespace:"calico-system", SelfLink:"", UID:"b2d6aeb0-8573-4e18-ace1-d7373d90dd3a", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 48, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf7bf66f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"", Pod:"calico-kube-controllers-6cf7bf66f6-xbpfk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8b49898ea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:48:17.190080 containerd[1635]: 2025-04-30 03:48:17.166 [INFO][6953] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.73/32] ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Namespace="calico-system" Pod="calico-kube-controllers-6cf7bf66f6-xbpfk" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0"
Apr 30 03:48:17.190080 containerd[1635]: 2025-04-30 03:48:17.166 [INFO][6953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8b49898ea2 ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Namespace="calico-system" Pod="calico-kube-controllers-6cf7bf66f6-xbpfk" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0"
Apr 30 03:48:17.190080 containerd[1635]: 2025-04-30 03:48:17.168 [INFO][6953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Namespace="calico-system" Pod="calico-kube-controllers-6cf7bf66f6-xbpfk" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0"
Apr 30 03:48:17.190080 containerd[1635]: 2025-04-30 03:48:17.169 [INFO][6953] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Namespace="calico-system" Pod="calico-kube-controllers-6cf7bf66f6-xbpfk" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0", GenerateName:"calico-kube-controllers-6cf7bf66f6-", Namespace:"calico-system", SelfLink:"", UID:"b2d6aeb0-8573-4e18-ace1-d7373d90dd3a", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 48, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf7bf66f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-b54c1f5c93", ContainerID:"a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71", Pod:"calico-kube-controllers-6cf7bf66f6-xbpfk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8b49898ea2", MAC:"4a:45:a2:ed:c5:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:48:17.190080 containerd[1635]: 2025-04-30 03:48:17.180 [INFO][6953] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71" Namespace="calico-system" Pod="calico-kube-controllers-6cf7bf66f6-xbpfk" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--6cf7bf66f6--xbpfk-eth0"
Apr 30 03:48:17.207189 containerd[1635]: time="2025-04-30T03:48:17.206968114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:48:17.207189 containerd[1635]: time="2025-04-30T03:48:17.207016746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:48:17.207189 containerd[1635]: time="2025-04-30T03:48:17.207029981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:48:17.207189 containerd[1635]: time="2025-04-30T03:48:17.207097880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:48:17.258585 containerd[1635]: time="2025-04-30T03:48:17.258507253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf7bf66f6-xbpfk,Uid:b2d6aeb0-8573-4e18-ace1-d7373d90dd3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71\""
Apr 30 03:48:17.264880 containerd[1635]: time="2025-04-30T03:48:17.264781114Z" level=info msg="CreateContainer within sandbox \"a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 30 03:48:17.273254 containerd[1635]: time="2025-04-30T03:48:17.273219455Z" level=info msg="CreateContainer within sandbox \"a6091b34f4a4f112f4bd373955e43dfd67e7e3828347ad759ade1904a5f8ea71\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"92c10b89a772e87f7a916cc1001c233b97ed4eba25c1b5e8781fbaf7d0cd4f88\""
Apr 30 03:48:17.273848 containerd[1635]: time="2025-04-30T03:48:17.273613653Z" level=info msg="StartContainer for \"92c10b89a772e87f7a916cc1001c233b97ed4eba25c1b5e8781fbaf7d0cd4f88\""
Apr 30 03:48:17.334475 containerd[1635]: time="2025-04-30T03:48:17.334437721Z" level=info msg="StartContainer for \"92c10b89a772e87f7a916cc1001c233b97ed4eba25c1b5e8781fbaf7d0cd4f88\" returns successfully"
Apr 30 03:48:17.475358 kubelet[3046]: I0430 03:48:17.475314 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h6cvk" podStartSLOduration=4.47529752 podStartE2EDuration="4.47529752s" podCreationTimestamp="2025-04-30 03:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:48:17.473377184 +0000 UTC m=+75.644909075" watchObservedRunningTime="2025-04-30 03:48:17.47529752 +0000 UTC m=+75.646829401"
Apr 30 03:48:17.496812 kubelet[3046]: I0430 03:48:17.494583 3046 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cf7bf66f6-xbpfk" podStartSLOduration=3.49456519 podStartE2EDuration="3.49456519s" podCreationTimestamp="2025-04-30 03:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:48:17.494198323 +0000 UTC m=+75.665730205" watchObservedRunningTime="2025-04-30 03:48:17.49456519 +0000 UTC m=+75.666097071"
Apr 30 03:48:18.692200 systemd-journald[1184]: Under memory pressure, flushing caches.
Apr 30 03:48:18.690992 systemd-resolved[1522]: Under memory pressure, flushing caches.
Apr 30 03:48:18.690997 systemd-resolved[1522]: Flushed all caches.
Apr 30 03:48:19.010374 systemd-networkd[1258]: calie8b49898ea2: Gained IPv6LL
Apr 30 03:48:24.645062 systemd-journald[1184]: Under memory pressure, flushing caches.
Apr 30 03:48:24.642568 systemd-resolved[1522]: Under memory pressure, flushing caches.
Apr 30 03:48:24.642574 systemd-resolved[1522]: Flushed all caches.
Apr 30 03:48:48.709213 systemd-journald[1184]: Under memory pressure, flushing caches.
Apr 30 03:48:48.706758 systemd-resolved[1522]: Under memory pressure, flushing caches.
Apr 30 03:48:48.706766 systemd-resolved[1522]: Flushed all caches.
Apr 30 03:48:51.629421 update_engine[1613]: I20250430 03:48:51.629338 1613 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 03:48:51.629421 update_engine[1613]: I20250430 03:48:51.629415 1613 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 03:48:51.632815 update_engine[1613]: I20250430 03:48:51.632781 1613 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 03:48:51.647265 update_engine[1613]: I20250430 03:48:51.647225 1613 omaha_request_params.cc:62] Current group set to lts Apr 30 03:48:51.648535 update_engine[1613]: I20250430 03:48:51.648468 1613 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 03:48:51.648535 update_engine[1613]: I20250430 03:48:51.648491 1613 update_attempter.cc:643] Scheduling an action processor start. Apr 30 03:48:51.648535 update_engine[1613]: I20250430 03:48:51.648515 1613 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:48:51.648636 update_engine[1613]: I20250430 03:48:51.648564 1613 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 03:48:51.648656 update_engine[1613]: I20250430 03:48:51.648638 1613 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:48:51.648656 update_engine[1613]: I20250430 03:48:51.648649 1613 omaha_request_action.cc:272] Request: Apr 30 03:48:51.648656 update_engine[1613]: Apr 30 03:48:51.648656 update_engine[1613]: Apr 30 03:48:51.648656 update_engine[1613]: Apr 30 03:48:51.648656 update_engine[1613]: Apr 30 03:48:51.648656 update_engine[1613]: Apr 30 03:48:51.648656 update_engine[1613]: Apr 30 03:48:51.648656 update_engine[1613]: Apr 30 03:48:51.648656 update_engine[1613]: Apr 30 03:48:51.652762 update_engine[1613]: I20250430 03:48:51.648658 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:48:51.675464 locksmithd[1653]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 03:48:51.683300 update_engine[1613]: I20250430 03:48:51.682987 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:48:51.684239 update_engine[1613]: I20250430 03:48:51.683254 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:48:51.684925 update_engine[1613]: E20250430 03:48:51.684794 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:48:51.685373 update_engine[1613]: I20250430 03:48:51.685351 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 03:49:01.485267 update_engine[1613]: I20250430 03:49:01.485196 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:49:01.485642 update_engine[1613]: I20250430 03:49:01.485405 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:49:01.485707 update_engine[1613]: I20250430 03:49:01.485651 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 03:49:01.486224 update_engine[1613]: E20250430 03:49:01.486195 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:49:01.486265 update_engine[1613]: I20250430 03:49:01.486239 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 03:49:03.203269 kubelet[3046]: I0430 03:49:03.203224 3046 scope.go:117] "RemoveContainer" containerID="42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6" Apr 30 03:49:03.224745 containerd[1635]: time="2025-04-30T03:49:03.209512652Z" level=info msg="RemoveContainer for \"42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6\"" Apr 30 03:49:03.257844 containerd[1635]: time="2025-04-30T03:49:03.257797821Z" level=info msg="RemoveContainer for \"42a0dc42a55ad5197a5b25a2f510eb9cefbb60e066d611a244d1722c6522bcd6\" returns successfully" Apr 30 03:49:03.259242 containerd[1635]: time="2025-04-30T03:49:03.259179178Z" level=info msg="StopPodSandbox for \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\"" Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.318 [WARNING][7405] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.319 [INFO][7405] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.319 [INFO][7405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" iface="eth0" netns="" Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.319 [INFO][7405] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.319 [INFO][7405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.357 [INFO][7413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.358 [INFO][7413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.358 [INFO][7413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.366 [WARNING][7413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.366 [INFO][7413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.367 [INFO][7413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:49:03.369806 containerd[1635]: 2025-04-30 03:49:03.368 [INFO][7405] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:49:03.370400 containerd[1635]: time="2025-04-30T03:49:03.369843753Z" level=info msg="TearDown network for sandbox \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\" successfully" Apr 30 03:49:03.370400 containerd[1635]: time="2025-04-30T03:49:03.369870254Z" level=info msg="StopPodSandbox for \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\" returns successfully" Apr 30 03:49:03.371382 containerd[1635]: time="2025-04-30T03:49:03.371339075Z" level=info msg="RemovePodSandbox for \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\"" Apr 30 03:49:03.373636 containerd[1635]: time="2025-04-30T03:49:03.373616954Z" level=info msg="Forcibly stopping sandbox \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\"" Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.418 [WARNING][7432] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.419 [INFO][7432] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.419 [INFO][7432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" iface="eth0" netns="" Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.419 [INFO][7432] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.419 [INFO][7432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.437 [INFO][7439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.437 [INFO][7439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.437 [INFO][7439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.446 [WARNING][7439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.446 [INFO][7439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" HandleID="k8s-pod-network.3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--2tcgf-eth0" Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.447 [INFO][7439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:49:03.451217 containerd[1635]: 2025-04-30 03:49:03.449 [INFO][7432] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc" Apr 30 03:49:03.453220 containerd[1635]: time="2025-04-30T03:49:03.451290241Z" level=info msg="TearDown network for sandbox \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\" successfully" Apr 30 03:49:03.536530 containerd[1635]: time="2025-04-30T03:49:03.536096268Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:49:03.536530 containerd[1635]: time="2025-04-30T03:49:03.536168114Z" level=info msg="RemovePodSandbox \"3af8eae1cad955ba1a9c4f423dbb45ccd501588d9cef581f091e2554d2f637cc\" returns successfully" Apr 30 03:49:03.537051 containerd[1635]: time="2025-04-30T03:49:03.536837698Z" level=info msg="StopPodSandbox for \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\"" Apr 30 03:49:03.537051 containerd[1635]: time="2025-04-30T03:49:03.536951763Z" level=info msg="TearDown network for sandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" successfully" Apr 30 03:49:03.537051 containerd[1635]: time="2025-04-30T03:49:03.536964797Z" level=info msg="StopPodSandbox for \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" returns successfully" Apr 30 03:49:03.537333 containerd[1635]: time="2025-04-30T03:49:03.537271447Z" level=info msg="RemovePodSandbox for \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\"" Apr 30 03:49:03.537333 containerd[1635]: time="2025-04-30T03:49:03.537292686Z" level=info msg="Forcibly stopping sandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\"" Apr 30 03:49:03.537333 containerd[1635]: time="2025-04-30T03:49:03.537330899Z" level=info msg="TearDown network for sandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" successfully" Apr 30 03:49:03.567946 containerd[1635]: time="2025-04-30T03:49:03.567368352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:49:03.567946 containerd[1635]: time="2025-04-30T03:49:03.567476416Z" level=info msg="RemovePodSandbox \"95bc9fc4b74d97da37cd866088304e53a66c69dd9e4f4a854c310b97c0fa1981\" returns successfully" Apr 30 03:49:03.569241 containerd[1635]: time="2025-04-30T03:49:03.568338464Z" level=info msg="StopPodSandbox for \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\"" Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.607 [WARNING][7457] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.607 [INFO][7457] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.607 [INFO][7457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" iface="eth0" netns="" Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.607 [INFO][7457] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.607 [INFO][7457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.624 [INFO][7464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.624 [INFO][7464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.624 [INFO][7464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.630 [WARNING][7464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.630 [INFO][7464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.631 [INFO][7464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:49:03.634985 containerd[1635]: 2025-04-30 03:49:03.633 [INFO][7457] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:49:03.636046 containerd[1635]: time="2025-04-30T03:49:03.635345532Z" level=info msg="TearDown network for sandbox \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\" successfully" Apr 30 03:49:03.636046 containerd[1635]: time="2025-04-30T03:49:03.635370910Z" level=info msg="StopPodSandbox for \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\" returns successfully" Apr 30 03:49:03.636046 containerd[1635]: time="2025-04-30T03:49:03.636038840Z" level=info msg="RemovePodSandbox for \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\"" Apr 30 03:49:03.636496 containerd[1635]: time="2025-04-30T03:49:03.636076312Z" level=info msg="Forcibly stopping sandbox \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\"" Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.665 [WARNING][7483] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.665 [INFO][7483] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.666 [INFO][7483] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" iface="eth0" netns="" Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.666 [INFO][7483] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.666 [INFO][7483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.685 [INFO][7491] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.685 [INFO][7491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.685 [INFO][7491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.690 [WARNING][7491] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.690 [INFO][7491] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" HandleID="k8s-pod-network.d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--apiserver--855c9c9d54--6r7fb-eth0" Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.691 [INFO][7491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:49:03.694327 containerd[1635]: 2025-04-30 03:49:03.693 [INFO][7483] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070" Apr 30 03:49:03.695129 containerd[1635]: time="2025-04-30T03:49:03.694360080Z" level=info msg="TearDown network for sandbox \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\" successfully" Apr 30 03:49:03.698155 containerd[1635]: time="2025-04-30T03:49:03.698103042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:49:03.698155 containerd[1635]: time="2025-04-30T03:49:03.698161323Z" level=info msg="RemovePodSandbox \"d9a43326e4bf7ce74d1f7f7787082bbf2bf4ed9c93dd77c5db87a35bff5f4070\" returns successfully" Apr 30 03:49:03.698705 containerd[1635]: time="2025-04-30T03:49:03.698687676Z" level=info msg="StopPodSandbox for \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\"" Apr 30 03:49:03.698792 containerd[1635]: time="2025-04-30T03:49:03.698745335Z" level=info msg="TearDown network for sandbox \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\" successfully" Apr 30 03:49:03.698792 containerd[1635]: time="2025-04-30T03:49:03.698769951Z" level=info msg="StopPodSandbox for \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\" returns successfully" Apr 30 03:49:03.699687 containerd[1635]: time="2025-04-30T03:49:03.699186137Z" level=info msg="RemovePodSandbox for \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\"" Apr 30 03:49:03.699687 containerd[1635]: time="2025-04-30T03:49:03.699215602Z" level=info msg="Forcibly stopping sandbox \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\"" Apr 30 03:49:03.699687 containerd[1635]: time="2025-04-30T03:49:03.699265567Z" level=info msg="TearDown network for sandbox \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\" successfully" Apr 30 03:49:03.702060 containerd[1635]: time="2025-04-30T03:49:03.701974739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:49:03.702060 containerd[1635]: time="2025-04-30T03:49:03.702011659Z" level=info msg="RemovePodSandbox \"434baaf6f9a446755eecb790a24ee4dc9d591085dfa417a8bc681cd83857649e\" returns successfully" Apr 30 03:49:03.702485 containerd[1635]: time="2025-04-30T03:49:03.702293420Z" level=info msg="StopPodSandbox for \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\"" Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.730 [WARNING][7510] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.730 [INFO][7510] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.730 [INFO][7510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" iface="eth0" netns="" Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.730 [INFO][7510] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.730 [INFO][7510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.748 [INFO][7518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.748 [INFO][7518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.748 [INFO][7518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.752 [WARNING][7518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.752 [INFO][7518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.754 [INFO][7518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:49:03.756462 containerd[1635]: 2025-04-30 03:49:03.755 [INFO][7510] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:49:03.757619 containerd[1635]: time="2025-04-30T03:49:03.756800611Z" level=info msg="TearDown network for sandbox \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\" successfully" Apr 30 03:49:03.757619 containerd[1635]: time="2025-04-30T03:49:03.756824846Z" level=info msg="StopPodSandbox for \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\" returns successfully" Apr 30 03:49:03.757619 containerd[1635]: time="2025-04-30T03:49:03.757240852Z" level=info msg="RemovePodSandbox for \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\"" Apr 30 03:49:03.757619 containerd[1635]: time="2025-04-30T03:49:03.757268092Z" level=info msg="Forcibly stopping sandbox \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\"" Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.786 [WARNING][7536] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" WorkloadEndpoint="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.786 [INFO][7536] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.786 [INFO][7536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" iface="eth0" netns="" Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.786 [INFO][7536] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.786 [INFO][7536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.803 [INFO][7543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.803 [INFO][7543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.803 [INFO][7543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.808 [WARNING][7543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.808 [INFO][7543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" HandleID="k8s-pod-network.b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Workload="ci--4081--3--3--c--b54c1f5c93-k8s-calico--kube--controllers--548667d5d7--rpfg6-eth0" Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.809 [INFO][7543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:49:03.813124 containerd[1635]: 2025-04-30 03:49:03.811 [INFO][7536] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8" Apr 30 03:49:03.813124 containerd[1635]: time="2025-04-30T03:49:03.813107969Z" level=info msg="TearDown network for sandbox \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\" successfully" Apr 30 03:49:03.816771 containerd[1635]: time="2025-04-30T03:49:03.816728321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:49:03.816966 containerd[1635]: time="2025-04-30T03:49:03.816804264Z" level=info msg="RemovePodSandbox \"b82f58fb9b6360c3fcb020ef89f1782abac3a7e8b6f403601088da7d194b10b8\" returns successfully" Apr 30 03:49:11.484828 update_engine[1613]: I20250430 03:49:11.484649 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:49:11.485580 update_engine[1613]: I20250430 03:49:11.485031 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:49:11.485580 update_engine[1613]: I20250430 03:49:11.485410 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:49:11.486459 update_engine[1613]: E20250430 03:49:11.486390 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:49:11.486603 update_engine[1613]: I20250430 03:49:11.486508 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 03:49:21.484354 update_engine[1613]: I20250430 03:49:21.484273 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:49:21.484824 update_engine[1613]: I20250430 03:49:21.484570 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:49:21.484866 update_engine[1613]: I20250430 03:49:21.484843 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:49:21.485502 update_engine[1613]: E20250430 03:49:21.485471 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:49:21.485567 update_engine[1613]: I20250430 03:49:21.485522 1613 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 03:49:21.485567 update_engine[1613]: I20250430 03:49:21.485530 1613 omaha_request_action.cc:617] Omaha request response: Apr 30 03:49:21.485618 update_engine[1613]: E20250430 03:49:21.485602 1613 omaha_request_action.cc:636] Omaha request network transfer failed. 
Apr 30 03:49:21.485702 update_engine[1613]: I20250430 03:49:21.485623 1613 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 03:49:21.485702 update_engine[1613]: I20250430 03:49:21.485628 1613 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:49:21.485702 update_engine[1613]: I20250430 03:49:21.485632 1613 update_attempter.cc:306] Processing Done. Apr 30 03:49:21.485702 update_engine[1613]: E20250430 03:49:21.485647 1613 update_attempter.cc:619] Update failed. Apr 30 03:49:21.485702 update_engine[1613]: I20250430 03:49:21.485651 1613 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 03:49:21.485702 update_engine[1613]: I20250430 03:49:21.485656 1613 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 03:49:21.485702 update_engine[1613]: I20250430 03:49:21.485661 1613 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 30 03:49:21.485838 update_engine[1613]: I20250430 03:49:21.485728 1613 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:49:21.485838 update_engine[1613]: I20250430 03:49:21.485749 1613 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:49:21.485838 update_engine[1613]: I20250430 03:49:21.485754 1613 omaha_request_action.cc:272] Request: Apr 30 03:49:21.485838 update_engine[1613]: Apr 30 03:49:21.485838 update_engine[1613]: Apr 30 03:49:21.485838 update_engine[1613]: Apr 30 03:49:21.485838 update_engine[1613]: Apr 30 03:49:21.485838 update_engine[1613]: Apr 30 03:49:21.485838 update_engine[1613]: Apr 30 03:49:21.485838 update_engine[1613]: I20250430 03:49:21.485759 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:49:21.486056 update_engine[1613]: I20250430 03:49:21.486032 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:49:21.486246 update_engine[1613]: I20250430 03:49:21.486161 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:49:21.486289 locksmithd[1653]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 03:49:21.486787 update_engine[1613]: E20250430 03:49:21.486752 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:49:21.486825 update_engine[1613]: I20250430 03:49:21.486793 1613 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 03:49:21.486825 update_engine[1613]: I20250430 03:49:21.486799 1613 omaha_request_action.cc:617] Omaha request response: Apr 30 03:49:21.486825 update_engine[1613]: I20250430 03:49:21.486805 1613 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:49:21.486825 update_engine[1613]: I20250430 03:49:21.486809 1613 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:49:21.486825 update_engine[1613]: I20250430 03:49:21.486814 1613 update_attempter.cc:306] Processing Done. Apr 30 03:49:21.486825 update_engine[1613]: I20250430 03:49:21.486819 1613 update_attempter.cc:310] Error event sent. 
Apr 30 03:49:21.487508 update_engine[1613]: I20250430 03:49:21.486826 1613 update_check_scheduler.cc:74] Next update check in 49m12s Apr 30 03:49:21.487551 locksmithd[1653]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 03:49:47.065602 systemd[1]: run-containerd-runc-k8s.io-92c10b89a772e87f7a916cc1001c233b97ed4eba25c1b5e8781fbaf7d0cd4f88-runc.WQrnH7.mount: Deactivated successfully. Apr 30 03:51:42.457283 systemd[1]: Started sshd@7-37.27.250.194:22-139.178.68.195:56236.service - OpenSSH per-connection server daemon (139.178.68.195:56236). Apr 30 03:51:43.462285 sshd[7883]: Accepted publickey for core from 139.178.68.195 port 56236 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:51:43.464864 sshd[7883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:51:43.473095 systemd-logind[1611]: New session 8 of user core. Apr 30 03:51:43.478146 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:51:44.587562 sshd[7883]: pam_unix(sshd:session): session closed for user core Apr 30 03:51:44.590777 systemd[1]: sshd@7-37.27.250.194:22-139.178.68.195:56236.service: Deactivated successfully. Apr 30 03:51:44.595463 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:51:44.597073 systemd-logind[1611]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:51:44.598638 systemd-logind[1611]: Removed session 8. Apr 30 03:51:49.750516 systemd[1]: Started sshd@8-37.27.250.194:22-139.178.68.195:44848.service - OpenSSH per-connection server daemon (139.178.68.195:44848). Apr 30 03:51:50.714069 sshd[7940]: Accepted publickey for core from 139.178.68.195 port 44848 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:51:50.715279 sshd[7940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:51:50.718826 systemd-logind[1611]: New session 9 of user core. Apr 30 03:51:50.723065 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:51:51.486185 sshd[7940]: pam_unix(sshd:session): session closed for user core Apr 30 03:51:51.488722 systemd[1]: sshd@8-37.27.250.194:22-139.178.68.195:44848.service: Deactivated successfully. Apr 30 03:51:51.491546 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:51:51.492993 systemd-logind[1611]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:51:51.494396 systemd-logind[1611]: Removed session 9. Apr 30 03:51:56.650111 systemd[1]: Started sshd@9-37.27.250.194:22-139.178.68.195:55404.service - OpenSSH per-connection server daemon (139.178.68.195:55404). Apr 30 03:51:57.636967 sshd[7955]: Accepted publickey for core from 139.178.68.195 port 55404 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:51:57.639162 sshd[7955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:51:57.643549 systemd-logind[1611]: New session 10 of user core. Apr 30 03:51:57.655091 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:51:58.407174 sshd[7955]: pam_unix(sshd:session): session closed for user core Apr 30 03:51:58.410023 systemd[1]: sshd@9-37.27.250.194:22-139.178.68.195:55404.service: Deactivated successfully. Apr 30 03:51:58.412024 systemd-logind[1611]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:51:58.412337 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:51:58.413679 systemd-logind[1611]: Removed session 10. 
Apr 30 03:51:58.570327 systemd[1]: Started sshd@10-37.27.250.194:22-139.178.68.195:55410.service - OpenSSH per-connection server daemon (139.178.68.195:55410). Apr 30 03:51:59.537418 sshd[7970]: Accepted publickey for core from 139.178.68.195 port 55410 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:51:59.537271 sshd[7970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:51:59.541525 systemd-logind[1611]: New session 11 of user core. Apr 30 03:51:59.548093 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:52:00.330855 sshd[7970]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:00.334188 systemd[1]: sshd@10-37.27.250.194:22-139.178.68.195:55410.service: Deactivated successfully. Apr 30 03:52:00.336081 systemd-logind[1611]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:52:00.337003 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:52:00.337673 systemd-logind[1611]: Removed session 11. Apr 30 03:52:00.493365 systemd[1]: Started sshd@11-37.27.250.194:22-139.178.68.195:55426.service - OpenSSH per-connection server daemon (139.178.68.195:55426). Apr 30 03:52:01.458376 sshd[7983]: Accepted publickey for core from 139.178.68.195 port 55426 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:01.459669 sshd[7983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:01.464263 systemd-logind[1611]: New session 12 of user core. Apr 30 03:52:01.472119 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:52:02.213531 sshd[7983]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:02.216039 systemd[1]: sshd@11-37.27.250.194:22-139.178.68.195:55426.service: Deactivated successfully. Apr 30 03:52:02.219845 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:52:02.219971 systemd-logind[1611]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:52:02.221369 systemd-logind[1611]: Removed session 12. Apr 30 03:52:07.377157 systemd[1]: Started sshd@12-37.27.250.194:22-139.178.68.195:40982.service - OpenSSH per-connection server daemon (139.178.68.195:40982). Apr 30 03:52:08.343393 sshd[8003]: Accepted publickey for core from 139.178.68.195 port 40982 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:08.344845 sshd[8003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:08.349034 systemd-logind[1611]: New session 13 of user core. Apr 30 03:52:08.354098 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:52:09.106087 sshd[8003]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:09.108510 systemd[1]: sshd@12-37.27.250.194:22-139.178.68.195:40982.service: Deactivated successfully. Apr 30 03:52:09.111952 systemd-logind[1611]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:52:09.113082 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:52:09.114093 systemd-logind[1611]: Removed session 13. Apr 30 03:52:14.267407 systemd[1]: Started sshd@13-37.27.250.194:22-139.178.68.195:40996.service - OpenSSH per-connection server daemon (139.178.68.195:40996). 
Apr 30 03:52:15.228283 sshd[8040]: Accepted publickey for core from 139.178.68.195 port 40996 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:15.229549 sshd[8040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:15.233447 systemd-logind[1611]: New session 14 of user core. Apr 30 03:52:15.238230 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:52:15.966339 sshd[8040]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:15.969807 systemd[1]: sshd@13-37.27.250.194:22-139.178.68.195:40996.service: Deactivated successfully. Apr 30 03:52:15.974170 systemd-logind[1611]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:52:15.974809 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:52:15.976701 systemd-logind[1611]: Removed session 14. Apr 30 03:52:21.131825 systemd[1]: Started sshd@14-37.27.250.194:22-139.178.68.195:33938.service - OpenSSH per-connection server daemon (139.178.68.195:33938). Apr 30 03:52:22.120103 sshd[8094]: Accepted publickey for core from 139.178.68.195 port 33938 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:22.122471 sshd[8094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:22.130020 systemd-logind[1611]: New session 15 of user core. Apr 30 03:52:22.134425 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:52:22.857044 sshd[8094]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:22.859453 systemd[1]: sshd@14-37.27.250.194:22-139.178.68.195:33938.service: Deactivated successfully. Apr 30 03:52:22.861959 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:52:22.862497 systemd-logind[1611]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:52:22.863673 systemd-logind[1611]: Removed session 15. Apr 30 03:52:23.021320 systemd[1]: Started sshd@15-37.27.250.194:22-139.178.68.195:33950.service - OpenSSH per-connection server daemon (139.178.68.195:33950). Apr 30 03:52:23.984045 sshd[8108]: Accepted publickey for core from 139.178.68.195 port 33950 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:23.985323 sshd[8108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:23.989078 systemd-logind[1611]: New session 16 of user core. Apr 30 03:52:23.996096 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:52:24.933332 sshd[8108]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:24.938093 systemd[1]: sshd@15-37.27.250.194:22-139.178.68.195:33950.service: Deactivated successfully. Apr 30 03:52:24.944561 systemd-logind[1611]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:52:24.945448 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:52:24.948721 systemd-logind[1611]: Removed session 16. Apr 30 03:52:25.097698 systemd[1]: Started sshd@16-37.27.250.194:22-139.178.68.195:33952.service - OpenSSH per-connection server daemon (139.178.68.195:33952). Apr 30 03:52:26.082513 sshd[8121]: Accepted publickey for core from 139.178.68.195 port 33952 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:26.084175 sshd[8121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:26.088939 systemd-logind[1611]: New session 17 of user core. Apr 30 03:52:26.092202 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 30 03:52:28.605318 sshd[8121]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:28.614769 systemd[1]: sshd@16-37.27.250.194:22-139.178.68.195:33952.service: Deactivated successfully. Apr 30 03:52:28.622296 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:52:28.622800 systemd-logind[1611]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:52:28.624441 systemd-logind[1611]: Removed session 17. Apr 30 03:52:28.765464 systemd[1]: Started sshd@17-37.27.250.194:22-139.178.68.195:49608.service - OpenSSH per-connection server daemon (139.178.68.195:49608). Apr 30 03:52:29.751178 sshd[8141]: Accepted publickey for core from 139.178.68.195 port 49608 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:29.753807 sshd[8141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:29.761982 systemd-logind[1611]: New session 18 of user core. Apr 30 03:52:29.767328 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:52:30.788226 systemd-journald[1184]: Under memory pressure, flushing caches. Apr 30 03:52:30.785968 systemd-resolved[1522]: Under memory pressure, flushing caches. Apr 30 03:52:30.785974 systemd-resolved[1522]: Flushed all caches. Apr 30 03:52:31.069302 sshd[8141]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:31.072740 systemd[1]: sshd@17-37.27.250.194:22-139.178.68.195:49608.service: Deactivated successfully. Apr 30 03:52:31.077418 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:52:31.078042 systemd-logind[1611]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:52:31.080651 systemd-logind[1611]: Removed session 18. Apr 30 03:52:31.232182 systemd[1]: Started sshd@18-37.27.250.194:22-139.178.68.195:49622.service - OpenSSH per-connection server daemon (139.178.68.195:49622). Apr 30 03:52:32.204968 sshd[8153]: Accepted publickey for core from 139.178.68.195 port 49622 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:32.207586 sshd[8153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:32.214704 systemd-logind[1611]: New session 19 of user core. Apr 30 03:52:32.222245 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:52:32.982207 sshd[8153]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:32.986528 systemd[1]: sshd@18-37.27.250.194:22-139.178.68.195:49622.service: Deactivated successfully. Apr 30 03:52:32.991608 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:52:32.991666 systemd-logind[1611]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:52:32.995780 systemd-logind[1611]: Removed session 19. Apr 30 03:52:38.146136 systemd[1]: Started sshd@19-37.27.250.194:22-139.178.68.195:41500.service - OpenSSH per-connection server daemon (139.178.68.195:41500). Apr 30 03:52:39.117292 sshd[8170]: Accepted publickey for core from 139.178.68.195 port 41500 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:39.119521 sshd[8170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:39.125966 systemd-logind[1611]: New session 20 of user core. Apr 30 03:52:39.135261 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:52:39.848534 sshd[8170]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:39.852630 systemd[1]: sshd@19-37.27.250.194:22-139.178.68.195:41500.service: Deactivated successfully. 
Apr 30 03:52:39.858816 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:52:39.859200 systemd-logind[1611]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:52:39.862484 systemd-logind[1611]: Removed session 20. Apr 30 03:52:44.161105 systemd[1]: run-containerd-runc-k8s.io-effeab5e14766e3bac2cf943e5d1fd0dad624e8159a132953498428290215855-runc.NRaUh7.mount: Deactivated successfully. Apr 30 03:52:45.011127 systemd[1]: Started sshd@20-37.27.250.194:22-139.178.68.195:41508.service - OpenSSH per-connection server daemon (139.178.68.195:41508). Apr 30 03:52:45.994028 sshd[8206]: Accepted publickey for core from 139.178.68.195 port 41508 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:52:45.996641 sshd[8206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:52:46.001635 systemd-logind[1611]: New session 21 of user core. Apr 30 03:52:46.007250 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:52:46.797070 sshd[8206]: pam_unix(sshd:session): session closed for user core Apr 30 03:52:46.800180 systemd[1]: sshd@20-37.27.250.194:22-139.178.68.195:41508.service: Deactivated successfully. Apr 30 03:52:46.803780 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:52:46.805052 systemd-logind[1611]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:52:46.806562 systemd-logind[1611]: Removed session 21. Apr 30 03:53:05.299089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61f1daf7e4df85451c85e133df0735024cee20d490eca5138befcfae9fba1ab3-rootfs.mount: Deactivated successfully. Apr 30 03:53:05.351191 containerd[1635]: time="2025-04-30T03:53:05.339752744Z" level=info msg="shim disconnected" id=61f1daf7e4df85451c85e133df0735024cee20d490eca5138befcfae9fba1ab3 namespace=k8s.io Apr 30 03:53:05.351191 containerd[1635]: time="2025-04-30T03:53:05.351137263Z" level=warning msg="cleaning up after shim disconnected" id=61f1daf7e4df85451c85e133df0735024cee20d490eca5138befcfae9fba1ab3 namespace=k8s.io Apr 30 03:53:05.351191 containerd[1635]: time="2025-04-30T03:53:05.351151139Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:53:05.391377 containerd[1635]: time="2025-04-30T03:53:05.391200733Z" level=info msg="shim disconnected" id=9ed94da080fbbcaf22b805c6c4cc39a071db303f296acd880c42d4b8f0c68be9 namespace=k8s.io Apr 30 03:53:05.391377 containerd[1635]: time="2025-04-30T03:53:05.391243013Z" level=warning msg="cleaning up after shim disconnected" id=9ed94da080fbbcaf22b805c6c4cc39a071db303f296acd880c42d4b8f0c68be9 namespace=k8s.io Apr 30 03:53:05.391377 containerd[1635]: time="2025-04-30T03:53:05.391249976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:53:05.394231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ed94da080fbbcaf22b805c6c4cc39a071db303f296acd880c42d4b8f0c68be9-rootfs.mount: Deactivated successfully. 
Apr 30 03:53:05.412954 kubelet[3046]: E0430 03:53:05.407526 3046 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53278->10.0.0.2:2379: read: connection timed out" Apr 30 03:53:05.510682 containerd[1635]: time="2025-04-30T03:53:05.510516999Z" level=info msg="shim disconnected" id=7d0bff9a08b3e9a1e7a8fd21975e5b801ac7e31936e16d58a51fe9c34979a229 namespace=k8s.io Apr 30 03:53:05.510682 containerd[1635]: time="2025-04-30T03:53:05.510581410Z" level=warning msg="cleaning up after shim disconnected" id=7d0bff9a08b3e9a1e7a8fd21975e5b801ac7e31936e16d58a51fe9c34979a229 namespace=k8s.io Apr 30 03:53:05.510682 containerd[1635]: time="2025-04-30T03:53:05.510590517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:53:05.511592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d0bff9a08b3e9a1e7a8fd21975e5b801ac7e31936e16d58a51fe9c34979a229-rootfs.mount: Deactivated successfully. Apr 30 03:53:06.454008 kubelet[3046]: I0430 03:53:06.453573 3046 scope.go:117] "RemoveContainer" containerID="9ed94da080fbbcaf22b805c6c4cc39a071db303f296acd880c42d4b8f0c68be9" Apr 30 03:53:06.459347 kubelet[3046]: I0430 03:53:06.459335 3046 scope.go:117] "RemoveContainer" containerID="61f1daf7e4df85451c85e133df0735024cee20d490eca5138befcfae9fba1ab3" Apr 30 03:53:06.480054 kubelet[3046]: I0430 03:53:06.479432 3046 scope.go:117] "RemoveContainer" containerID="7d0bff9a08b3e9a1e7a8fd21975e5b801ac7e31936e16d58a51fe9c34979a229" Apr 30 03:53:06.502859 containerd[1635]: time="2025-04-30T03:53:06.502805971Z" level=info msg="CreateContainer within sandbox \"0d2fe42cc14aabba9877db327872b55ae94d8cadd9a2730afcc739341b1d49da\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 30 03:53:06.504055 containerd[1635]: time="2025-04-30T03:53:06.502805981Z" level=info msg="CreateContainer within sandbox \"9ca51c990713367cfdbe8d6e5f5240c63340bc804ae4baee12e46d85947729d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 30 03:53:06.602343 containerd[1635]: time="2025-04-30T03:53:06.602312105Z" level=info msg="CreateContainer within sandbox \"87f9eb28eaddebbf0ef14060b7a4bb9f5f8fdbc890e5e5367737391d44e98700\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 30 03:53:06.654254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228000472.mount: Deactivated successfully. 
Apr 30 03:53:06.660333 containerd[1635]: time="2025-04-30T03:53:06.660292624Z" level=info msg="CreateContainer within sandbox \"87f9eb28eaddebbf0ef14060b7a4bb9f5f8fdbc890e5e5367737391d44e98700\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"574cca2aa9637396e23a3569875c9adc1aba98520d79afabc54aa2f83adb1dba\"" Apr 30 03:53:06.667262 containerd[1635]: time="2025-04-30T03:53:06.666651965Z" level=info msg="CreateContainer within sandbox \"0d2fe42cc14aabba9877db327872b55ae94d8cadd9a2730afcc739341b1d49da\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"35b53f2e815b3290734f5a8d8c64da3b4b537f1eb6311bc7bfd3720041d75bdd\"" Apr 30 03:53:06.671924 containerd[1635]: time="2025-04-30T03:53:06.671734633Z" level=info msg="CreateContainer within sandbox \"9ca51c990713367cfdbe8d6e5f5240c63340bc804ae4baee12e46d85947729d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"670b7ae376c31de16ce7404348d99acfd16ce6de95002d789ea04797f30efd5c\"" Apr 30 03:53:06.682186 containerd[1635]: time="2025-04-30T03:53:06.681489773Z" level=info msg="StartContainer for \"670b7ae376c31de16ce7404348d99acfd16ce6de95002d789ea04797f30efd5c\"" Apr 30 03:53:06.682186 containerd[1635]: time="2025-04-30T03:53:06.681521272Z" level=info msg="StartContainer for \"35b53f2e815b3290734f5a8d8c64da3b4b537f1eb6311bc7bfd3720041d75bdd\"" Apr 30 03:53:06.683128 containerd[1635]: time="2025-04-30T03:53:06.681496025Z" level=info msg="StartContainer for \"574cca2aa9637396e23a3569875c9adc1aba98520d79afabc54aa2f83adb1dba\"" Apr 30 03:53:06.799359 containerd[1635]: time="2025-04-30T03:53:06.798711665Z" level=info msg="StartContainer for \"574cca2aa9637396e23a3569875c9adc1aba98520d79afabc54aa2f83adb1dba\" returns successfully" Apr 30 03:53:06.803720 containerd[1635]: time="2025-04-30T03:53:06.803612208Z" level=info msg="StartContainer for \"670b7ae376c31de16ce7404348d99acfd16ce6de95002d789ea04797f30efd5c\" returns successfully" Apr 30 03:53:06.806980 containerd[1635]: time="2025-04-30T03:53:06.806915863Z" level=info msg="StartContainer for \"35b53f2e815b3290734f5a8d8c64da3b4b537f1eb6311bc7bfd3720041d75bdd\" returns successfully"